| Modifier and Type | Method and Description |
| --- | --- |
| String | SiteStore.getInternalWorkDirectory(Job job): This determines the working directory on the remote execution pool for a particular job. |
| String | SiteStore.getInternalWorkDirectory(Job job, boolean onStagingSite): This determines the working directory on the remote execution pool or a staging site for a particular job. |
| Modifier and Type | Method and Description |
| --- | --- |
| String | PoolInfoProvider.getExecPoolWorkDir(Job job): This determines the working directory on the remote execution pool for a particular job. |
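
Taken together, these lookups resolve where a job will run remotely. Below is a minimal sketch of calling them, assuming an already-populated SiteStore and Job instance; the import paths follow the usual Pegasus planner layout but are assumptions, not confirmed by this page.

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.catalog.site.classes.SiteStore;
import edu.isi.pegasus.planner.classes.Job;

public class WorkDirLookup {
    // Resolves the remote working directory for a job, preferring the
    // staging-site variant when the job stages data through another site.
    static String workDir(SiteStore store, Job job, boolean onStagingSite) {
        return onStagingSite
                ? store.getInternalWorkDirectory(job, true)  // staging-site directory
                : store.getInternalWorkDirectory(job);       // execution-site directory
    }
}
```
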
| Modifier and Type | Class and Description |
| --- | --- |
| class | AggregatedJob: This class holds all the specifics of an aggregated job. |
| class | DAGJob: This is a data class that stores the contents of the DAG job in a DAX conforming to schema 3.0 or higher. |
| class | DAXJob: This is a data class that stores the contents of the DAX job in a DAX conforming to schema 3.0 or higher. |
| class | TransferJob: This is a data class that stores the contents of the transfer job that transfers the data. |
| Modifier and Type | Method and Description |
| --- | --- |
| Job | AggregatedJob.getConstituentJob(int index): Returns a job from a particular position in the list of constituent jobs. |
| Modifier and Type | Method and Description |
| --- | --- |
| Iterator<Job> | AggregatedJob.constituentJobsIterator(): Returns an iterator to the constituent jobs of the AggregatedJob. |
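
A short sketch of walking the constituent jobs of a clustered job with the two accessors above; the AggregatedJob instance is assumed to come from the clustering step, and getID() is an assumption about the Job API.

```java
import java.util.Iterator;
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.AggregatedJob;
import edu.isi.pegasus.planner.classes.Job;

public class ConstituentWalk {
    // Prints the IDs of all constituent jobs via the iterator, then
    // fetches the first one by position.
    static void walk(AggregatedJob clustered) {
        for (Iterator<Job> it = clustered.constituentJobsIterator(); it.hasNext(); ) {
            Job j = it.next();
            System.out.println(j.getID()); // getID() is an assumption about the Job API
        }
        Job first = clustered.getConstituentJob(0); // positional access
        System.out.println("first: " + first);
    }
}
```
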
| Modifier and Type | Method and Description |
| --- | --- |
| void | ADag.add(Job job): This adds a new job to the ADAG object. |
| void | AggregatedJob.add(Job job): Adds a job to the aggregated job. |
| void | WorkflowMetrics.decrement(Job job): Decrements the metrics on the basis of the type of the job being removed. |
| private void | WorkflowMetrics.decrementJobMetrics(Job job): Decrements the job metrics on the basis of the type of job. |
| private void | WorkflowMetrics.decrementTaskMetrics(Job job): Decrements the task metrics on the basis of the type of job. |
| void | WorkflowMetrics.increment(Job job): Increments the metrics on the basis of the type of job. |
| private void | WorkflowMetrics.incrementJobMetrics(Job job): Increments the job metrics on the basis of the type of job. |
| private void | WorkflowMetrics.incrementTaskMetrics(Job job): Increments the task metrics on the basis of the type of job. |
| void | Job.mergeProfiles(Job job): Merges profiles from another job to this job in a controlled fashion. |
| boolean | ADag.remove(Job job): Removes a particular job from the workflow. |
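
The add/remove pair on ADag is how the planner mutates a workflow; the WorkflowMetrics increment/decrement hooks above are invoked internally. A hedged sketch, assuming the ADag and Job instances come from elsewhere in the planner:

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.ADag;
import edu.isi.pegasus.planner.classes.Job;

public class DagEdit {
    // Adds a job to the workflow and later removes it again.
    static void edit(ADag dag, Job job) {
        dag.add(job);                      // metrics incremented by job type
        boolean removed = dag.remove(job); // metrics decremented on success
        System.out.println("removed: " + removed);
    }
}
```
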
| Constructor and Description |
| --- |
| AggregatedJob(Job job, int num): The overloaded constructor. |
| DAGJob(Job job): The overloaded constructor that constructs a DAG job by wrapping around the passed Job object. |
| DAXJob(Job job): The overloaded constructor that constructs a DAX job by wrapping around the passed Job object. |
| Job(Job job): Overloaded constructor. |
| TransferJob(Job job): The overloaded constructor that constructs a GRMS job by wrapping around the passed Job object. |
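
These wrapping constructors convert a plain Job into its specialized subtypes. A minimal sketch, assuming the plain job was produced by the DAX parser:

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.DAGJob;
import edu.isi.pegasus.planner.classes.DAXJob;
import edu.isi.pegasus.planner.classes.Job;

public class WrapJobs {
    // Wraps a generic Job into the DAG and DAX specializations.
    static void wrap(Job plain) {
        DAGJob dagJob = new DAGJob(plain); // copies the plain job's state
        DAXJob daxJob = new DAXJob(plain);
        System.out.println(dagJob + " / " + daxJob);
    }
}
```
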
| Modifier and Type | Method and Description |
| --- | --- |
| protected Job | CPlanner.createNoOPJob(String name): It creates a NoOP job that runs on the submit host. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected void | CPlanner.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| Modifier and Type | Field and Description |
| --- | --- |
| protected Map<String,Job> | Abstract.mPartitionClusterMap: A Map that indexes the partition ID to the clustered job. |
| protected Map<String,Job> | Abstract.mSubInfoMap: A Map that stores all the Job objects indexed by their logical ID found in the DAX. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected Job | Abstract.clusteredJob(Partition p): Returns the clustered job corresponding to a partition. |
| protected Job | Abstract.clusteredJob(String id): Returns the clustered job corresponding to a partition ID. |
| protected Job | Abstract.getJob(String id): Returns the job object corresponding to the ID of the job. |
| Modifier and Type | Method and Description |
| --- | --- |
| private List<List<Job>> | Horizontal.bestFitBinPack(List<Job> jobs, double maxTime): Performs best-fit bin packing. |
| private List<List<Job>> | Horizontal.bestFitBinPack(List<Job> jobs, int maxBins): Performs best-fit bin packing. |
| private Comparator<Job> | Horizontal.getBinPackingComparator(): The comparator is used to sort a collection of jobs in decreasing order of their run times as specified by the Pegasus.JOB_RUN_TIME property. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected void | Abstract.addJob(Job job): Adds jobs to the internal map of jobs that is maintained by the clusterer. |
| protected void | Abstract.associate(Partition p, Job job): Maps the partition to the corresponding clustered job. |
| int[] | Horizontal.getCollapseFactor(String pool, Job job, int size): Returns the collapse factor that is used to chunk up the jobs of a particular type on a pool. |
| private String | Horizontal.getRunTime(Job job) |
| private void | Horizontal.updateReplacementTable(List jobs, Job mergedJob): Updates the replacement table. |
| Modifier and Type | Method and Description |
| --- | --- |
| private List<List<Job>> | Horizontal.bestFitBinPack(List<Job> jobs, double maxTime): Performs best-fit bin packing. |
| private List<List<Job>> | Horizontal.bestFitBinPack(List<Job> jobs, int maxBins): Performs best-fit bin packing. |
| abstract void | Abstract.determineInputOutputFiles(AggregatedJob job, List<Job> orderedJobs): Determines the input and output files of the job on the basis of the order of the constituent jobs in the AggregatedJob. |
| void | Vertical.determineInputOutputFiles(AggregatedJob job, List<Job> orderedJobs): Determines the input and output files of the job on the basis of the order of the constituent jobs in the AggregatedJob. |
| protected String | Abstract.getLogicalNameForJobs(List<Job> jobs): Returns the logical names for the jobs. |
| protected String | Vertical.getLogicalNameForJobs(List<Job> jobs): Returns null, as for label-based clustering we don't want the transformation name to be considered when constructing the name of the clustered jobs. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected String | Abstract.getCommentString(Job job, int taskid): Generates the comment string for the job. |
| String | MPIExec.getCPURequirementsArgument(Job job): Looks at the profile keys associated with the job to generate the argument string fragment containing the CPUs required for the job. |
| String | MPIExec.getExtraArguments(Job job): Looks at the profile key pegasus::pmc_task_arguments to determine if extra arguments are required for the task. |
| String | MPIExec.getMemoryRequirementsArgument(Job job): Looks at the profile keys associated with the job to generate the argument string fragment containing the memory required for the job. |
| String | MPIExec.getPriorityArgument(Job job): Looks at the profile keys associated with the job to generate the argument string fragment containing the priority to be associated with the job. |
| protected TransformationCatalogEntry | Abstract.getTCEntry(Job job): Helper method to get an entry from the transformation catalog for an installed executable. |
| JobAggregator | JobAggregatorInstanceFactory.loadInstance(Job job): Returns the appropriate handle to the JobAggregator that is to be used for a particular type of job. |
| Modifier and Type | Method and Description |
| --- | --- |
| boolean | POSTScript.construct(Job job, String key): Constructs the postscript that has to be invoked on the submit host after the job has executed on the remote end. |
| boolean | GridStart.enable(Job job, boolean isGlobusJob): Enables a job to run on the grid. |
| void | CodeGenerator.generateCode(ADag dag, Job job): Generates the code for a single job in the input format of the workflow executor being used. |
| String | GridStart.getWorkerNodeDirectory(Job job): Returns the directory in which the job executes on the worker node. |
| GridStart | GridStartFactory.loadGridStart(Job job, String gridStartPath): Loads the appropriate gridstart implementation for a job on the basis of the value of the GRIDSTART_KEY in the Pegasus namespace. |
| POSTScript | GridStartFactory.loadPOSTScript(Job job, GridStart gridStart): Loads the appropriate POSTScript implementation for a job on the basis of the value of the Pegasus profile GRIDSTART_KEY and the DAGMan profile POST_SCRIPT_KEY in the Pegasus namespace. |
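
A sketch of the gridstart enabling flow implied by this table, assuming an initialized GridStartFactory instance and a known path to the gridstart executable; import paths are assumptions.

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.code.GridStart;
import edu.isi.pegasus.planner.code.GridStartFactory;

public class EnableJob {
    // Loads the gridstart implementation selected by the job's profiles
    // and enables the job for grid execution.
    static boolean enable(GridStartFactory factory, Job job, String gridStartPath) {
        GridStart gs = factory.loadGridStart(job, gridStartPath);
        return gs.enable(job, true); // true: submitted as a Globus job
    }
}
```
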
| Modifier and Type | Method and Description |
| --- | --- |
| protected String | Shell.generateCallToCheckExitcode(Job job, String prefix): Generates a call to the check_exitcode function that is used to check a job's exit code. |
| protected String | Shell.generateCallToExecuteJob(Job job, String scratchDirectory, String submitDirectory): Generates a call to the execute_job function that is used to launch a job from the shell script. |
| protected String | Shell.generateCallToExecutePostScript(Job job, String directory): Generates a call to the execute_post_script function that is used to launch a job's postscript from the shell script. |
| void | Stampede.generateCode(ADag dag, Job job): Method not implemented. |
| void | Shell.generateCode(ADag dag, Job job): Generates the code for a single job in the input format of the workflow executor being used. |
| void | DAXReplicaStore.generateCode(ADag dag, Job job): Not implemented. |
| void | PBS.generateCode(ADag dag, Job job): Generates the code for a single job in the input format of the workflow executor being used. |
| void | Stork.generateCode(ADag dag, Job job): Generates the code for a single job in the Stork format. |
| void | MonitordNotify.generateCode(ADag dag, Job job): Not implemented. |
| void | PMC.generateCode(ADag dag, Job job): Generates the code for a single job in the input format of the workflow executor being used. |
| void | Braindump.generateCode(ADag dag, Job job): Method not implemented. |
| protected void | Stampede.generateEventsForDAXTask(PrintWriter writer, ADag workflow, Job job): Generates stampede events corresponding to jobs/tasks in the DAX. |
| protected void | Stampede.generateEventsForExecutableJob(PrintWriter writer, ADag dag, Job job): Generates stampede events corresponding to an executable job. |
| private String | NetloggerJobMapper.generateLogEvent(Job job, String prefix): Generates a log event message in the netlogger format for a job. |
| protected void | Stampede.generateTaskMapEvents(PrintWriter writer, ADag dag, Job job): Generates the task.map events that link the jobs in the DAX with the jobs in the executable workflow. |
| protected String | Shell.getExecutionDirectory(Job job): Returns the directory in which a job should be executed. |
| String | Abstract.getFileBaseName(Job job): Returns the basename of the file to which the job is written. |
| String | Stork.getFileBaseName(Job job): Returns the basename of the file to which the job is written. |
| private int | Stampede.getTaskCount(Job job): Returns the task count for a job. |
| private int | NetloggerJobMapper.getTaskCount(Job job): Returns the task count for a job. |
| PrintWriter | Abstract.getWriter(Job job): Returns an open stream to the file that is used for writing out the job information for the job. |
| PrintWriter | HashedFile.getWriter(Job job): Returns an open stream to the file that is used for writing out the job information for the job. |
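
A sketch of per-job code generation against the CodeGenerator interface above; the concrete generator (Shell, PBS, Stork, and so on) is assumed to have been loaded and initialized elsewhere, and the broad throws clause stands in for the generator's own checked exception type.

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.ADag;
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.code.CodeGenerator;

public class PerJobCodegen {
    // Emits the submit-format code for one job of the workflow.
    static void emit(CodeGenerator generator, ADag dag, Job job) throws Exception {
        generator.generateCode(dag, job); // writes in the executor's input format
    }
}
```
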
| Modifier and Type | Method and Description |
| --- | --- |
| protected Job | SUBDAXGenerator.constructDAGJob(Job subdaxJob, File directory, File subdaxDirectory, String basenamePrefix): Constructs a job that plans and submits the partitioned workflow, referred to by a Partition. |
| protected Job | CondorGenerator.constructDAGJob(String name, String directory, String dagBasename): Constructs a job that plans and submits the partitioned workflow, referred to by a Partition. |
| Job | SUBDAXGenerator.constructPegasusPlanPrescript(Job job, PlannerOptions options, String rootUUID, String properties, String log): Constructs the pegasus-plan prescript for the sub-DAX. |
| Job | SUBDAXGenerator.generateCode(Job job): Generates code for a job. |
| Modifier and Type | Method and Description |
| --- | --- |
| void | CondorStyle.apply(Job job): Applies a style to a job. |
| protected void | CondorGenerator.applyStyle(Job job, PrintWriter writer): Applies a submit file style to the job, according to whether the job has to be submitted directly to Condor or to a remote jobmanager via Condor-G and GRAM. |
| protected Job | SUBDAXGenerator.constructDAGJob(Job subdaxJob, File directory, File subdaxDirectory, String basenamePrefix): Constructs a job that plans and submits the partitioned workflow, referred to by a Partition. |
| String | SUBDAXGenerator.constructDAGManKnobs(Job job): Constructs any extra arguments that need to be passed to dagman, as determined from the properties file. |
| Job | SUBDAXGenerator.constructPegasusPlanPrescript(Job job, PlannerOptions options, String rootUUID, String properties, String log): Constructs the pegasus-plan prescript for the sub-DAX. |
| protected File | SUBDAXGenerator.constructPlannerPrescriptWrapper(Job dagJob, File directory, String executable, String arguments): Constructs a pegasus-plan wrapper script that changes the directory in which pegasus-plan is launched. |
| static void | ClassADSGenerator.generate(PrintWriter writer, ADag dag, Job job): Writes out the classads for a job to the corresponding writer stream. |
| void | CondorGenerator.generateCode(ADag dag, Job job): Generates the code (condor submit file) for a single job. |
| Job | SUBDAXGenerator.generateCode(Job job): Generates code for a job. |
| protected int | CondorGenerator.getJobPriority(Job job, int depth): Computes the priority for a job based on the job type and its depth in the workflow. |
| Set<String> | SUBDAXGenerator.getParentsTransientRC(Job job): Returns a set containing the paths to the parent DAX jobs' transient replica catalogs. |
| private String | CondorGenerator.gridstart(PrintWriter writer, Job job, boolean isGlobusJob): This function creates the stdio handling with and without gridstart. |
| protected void | CondorGenerator.handleCondorVarForJob(Job job): Updates/adds the condor variables that are obtained from the DAX with the values specified in the properties file or pool config file, or adds some variables internally. |
| protected void | CondorGenerator.handleEnvVarForJob(Job sinfo): Updates/adds the environment variables that are obtained from the DAX with the values specified in the properties file or pool config file, or adds some variables internally. |
| protected void | CondorGenerator.handleGlobusRSLForJob(Job sinfo): Updates/adds the Globus RSL parameters obtained from the DAX that are in the Job object. |
| CondorStyle | CondorStyleFactory.loadInstance(Job job): This method loads the appropriate implementing CondorStyle as specified by the user at runtime. |
| void | CondorGenerator.populatePeriodicReleaseAndRemoveInJob(Job job): Populates the periodic release and remove values in the job. |
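
A sketch tying the style factory to a job, assuming an initialized CondorStyleFactory; the package path and the broad throws clause (standing in for the style's own checked exception) are assumptions.

```java
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.code.generator.condor.CondorStyle;
import edu.isi.pegasus.planner.code.generator.condor.CondorStyleFactory;

public class StyleJob {
    // Picks the CondorStyle configured for the job and applies it,
    // populating the job's Condor profile keys for submission.
    static void style(CondorStyleFactory factory, Job job) throws Exception {
        CondorStyle style = factory.loadInstance(job);
        style.apply(job);
    }
}
```
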
| Modifier and Type | Method and Description |
| --- | --- |
| void | CreamCE.apply(Job job): Applies the CREAM CE style to the job. |
| void | SSH.apply(Job job): Applies the SSH style to the job. |
| void | CondorG.apply(Job job): Applies the globus style to the job. |
| void | CondorC.apply(Job job): Applies the CondorC style to the job. |
| void | Condor.apply(Job job): Applies the condor style to the job. |
| void | CondorGlideIN.apply(Job job): Applies the style to the job to be run in a condor glide-in environment. |
| void | GLite.apply(Job job): Applies the gLite style to the job. |
| void | CondorGlideinWMS.apply(Job job) |
| protected void | Abstract.applyCredentialsForJobSubmission(Job job): Associates credentials required for job submission. |
| protected void | Abstract.applyCredentialsForLocalExec(Job job): Examines the credential requirements for a job and adds appropriate transfer and environment directives for the credentials to be picked up for the local job. |
| protected void | Abstract.applyCredentialsForRemoteExec(Job job): Examines the credential requirements for a job and adds appropriate transfer and environment directives for the credentials to be staged and picked up by the job. |
| protected void | Abstract.complainForCredential(Job job, String key, String site): Complains if a particular credential key is not found for a site. |
| protected String | CreamCE.constructGridResource(Job job): Constructs the grid_resource entry for the job. |
| protected String | SSH.constructGridResource(Job job): Constructs the grid_resource entry for the job. |
| protected String | CondorC.constructGridResource(Job job): Constructs the grid_resource entry for the job. |
| protected String | Abstract.errorMessage(Job job, String style, String universe): Constructs an error message in case of a style mismatch. |
| protected String | GLite.getCERequirementsForJob(Job job): Constructs the value of the remote CE requirements expression for the job. |
| protected String | GLite.missingKeyError(Job job, String key): Constructs an error message in case of a missing key. |
| private void | Condor.wrapJobWithLocalPegasusLite(Job job): Wraps the local universe jobs with a local PegasusLite wrapper to get around the Condor file I/O bug for local universe jobs. |
| Modifier and Type | Method and Description |
| --- | --- |
| private void | Kickstart.addCleanupPostScript(Job job, List files): Adds a /bin/rm postscript to kickstart that removes the files passed. |
| private void | PegasusLite.associateCredentials(Job job, Collection<FileTransfer> files): Associates credentials with the job corresponding to the files that are being transferred. |
| boolean | NetloggerPostScript.construct(Job job, String key): Constructs the postscript that has to be invoked on the submit host after the job has executed on the remote end. |
| boolean | NoPOSTScript.construct(Job job, String key): Constructs the postscript that has to be invoked on the submit host after the job has executed on the remote end. |
| boolean | UserPOSTScript.construct(Job job, String key): Constructs the postscript that has to be invoked on the submit host after the job has executed on the remote end. |
| boolean | PegasusExitCode.construct(Job job, String key): Constructs the postscript that has to be invoked on the submit host after the job has executed on the remote end. |
| private void | Kickstart.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| private void | NoGridStart.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| private void | PegasusLite.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| protected String | Kickstart.constructCleanupJob(Job job, String workerNodeTmp): Constructs a kickstart cleanup job. |
| protected String | Kickstart.constructPREJob(Job job, String headNodeURLPrefix, String headNodeDirectory, String workerNodeDirectory, String slsFile): Constructs the prejob that fetches the SLS file and then invokes the transfer again. |
| protected String | Kickstart.constructSetupJob(Job job, String workerNodeTmp): Constructs a kickstart setup job. |
| boolean | Kickstart.enable(Job job, boolean isGlobusJob): Enables a job to run on the grid by launching it through kickstart. |
| boolean | NoGridStart.enable(Job job, boolean isGlobusJob): Enables a job to run on the grid by launching it directly. |
| boolean | PegasusLite.enable(Job job, boolean isGlobusJob): Enables a job to run on the grid by launching it directly. |
| protected boolean | Kickstart.enable(Job job, boolean isGlobusJob, boolean stat, boolean addPostScript, boolean partOfClusteredJob): Enables a job to run on the grid by launching it through kickstart. |
| private void | PegasusLite.enableForWorkerNodeExecution(Job job, boolean isGlobusJob): Enables jobs for worker node execution. |
| protected String | Kickstart.getDirectory(Job job): Returns the directory in which the job should run. |
| protected String | NoGridStart.getDirectory(Job job): Returns the directory in which the job should run. |
| private String | Kickstart.getDirectoryKey(Job job): Returns the directory that is associated with the job to specify the directory in which the job needs to run. |
| private String | NoGridStart.getDirectoryKey(Job job): Returns the directory that is associated with the job to specify the directory in which the job needs to run. |
| private String | PegasusLite.getDirectoryKey(Job job): Returns the directory that is associated with the job to specify the directory in which the job needs to run. |
| String | Kickstart.getWorkerNodeDirectory(Job job): Returns the directory in which the job executes on the worker node. |
| String | NoGridStart.getWorkerNodeDirectory(Job job): Returns the directory in which the job executes on the worker node. |
| String | PegasusLite.getWorkerNodeDirectory(Job job): Returns the directory in which the job executes on the worker node. |
| protected String | NoGridStart.handleTransferOfExecutable(Job job): It changes the paths to the executable depending on whether we want to transfer the executable or not. |
| protected String | Kickstart.handleTransferOfExecutable(Job job, String path): It changes the paths to the executable depending on whether we want to transfer the executable or not. |
| private boolean | NoGridStart.removeDirectoryKey(Job job): Returns a boolean indicating whether to remove remote directory information from the job or not. |
| private boolean | PegasusLite.removeDirectoryKey(Job job): Returns a boolean indicating whether to remove remote directory information from the job or not. |
| protected boolean | Kickstart.requiresToSetDirectory(Job job): Returns a boolean indicating whether we need to set the directory for the job or not. |
| protected boolean | NoGridStart.requiresToSetDirectory(Job job): Returns a boolean indicating whether we need to set the directory for the job or not. |
| private boolean | Kickstart.useInvoke(Job job, String executable, StringBuffer args): Triggers the creation of the kickstart input file, which contains the remote executable and the arguments with which it has to be invoked. |
| protected File | PegasusLite.wrapJobWithPegasusLite(Job job, boolean isGlobusJob): Generates the PegasusLite wrapper script file for the job. |
| Modifier and Type | Field and Description |
| --- | --- |
| private Job | DAXParser2.mCurrentJobSubInfo: Holds information regarding the current job being parsed. |
| Modifier and Type | Field and Description |
| --- | --- |
| private Map<String,Job> | DAX2CDAG.mFileCreationMap: Map that associates an LFN with the job that creates the file corresponding to that LFN. |
| Modifier and Type | Method and Description |
| --- | --- |
| void | ExampleDAXCallback.cbJob(Job job): Callback for a job from section 2 (jobs). |
| void | DAX2Metadata.cbJob(Job job): Callback for a job from section 2 (jobs). |
| void | DAX2CDAG.cbJob(Job job): Callback for a job from section 2 (jobs). |
| void | DAX2LabelGraph.cbJob(Job job): This constructs a graph node for the job and ends up storing it in the internal map. |
| void | DAX2Graph.cbJob(Job job): This constructs a graph node for the job and ends up storing it in the internal map. |
| void | Callback.cbJob(Job job): Callback for a job from section 4: jobs, DAXes, or DAGs that list a JOB, DAX, or DAG element. |
| void | DAX2NewGraph.cbJob(Job job): This constructs a graph node for the job and ends up storing it in the internal map. |
| protected String | DAXParser3.constructJobID(Job j): Returns the ID for a job. |
| private TransformationCatalogEntry | DAXParser2.constructTCEntryFromJobHints(Job job): Constructs a TC entry object from the contents of a job. |
| private void | DAXParser2.handleJobTagStart(Job job): Invoked when the start of the job element is encountered. |
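
The cbJob callback is where a parser client receives each job. A minimal sketch of a custom implementation that just counts jobs; only cbJob is shown, and a real class would implement the Callback interface's other methods too.

```java
// Import path is an assumption based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.Job;

public class CountingCallback /* implements Callback */ {
    private int mJobCount = 0;

    // Invoked once per job element encountered in the DAX.
    public void cbJob(Job job) {
        mJobCount++;
        System.out.println("parsed job #" + mJobCount);
    }
}
```
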
| Modifier and Type | Method and Description |
| --- | --- |
| protected Job | PDAX2MDAG.constructDAGJob(Partition partition, File directory, String dax): Constructs a job that plans and submits the partitioned workflow, referred to by a Partition. |
| protected Job | PDAX2MDAG.getJob(String id): Returns the job that has been constructed for a particular partition. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected String | PDAX2MDAG.getBasenamePrefix(Job job): Returns the basename prefix of a dagman (usually) related file for a job that submits a nested dagman. |
| protected String | PDAX2MDAG.getCacheFilePath(Job job): Returns the full path to a cache file that corresponds to one partition. |
| protected void | PDAX2MDAG.setPrescript(Job job, String daxURL, String log): Sets the prescript that ends up calling the default wrapper that introduces retry into Pegasus for a particular job. |
| protected void | PDAX2MDAG.setPrescript(Job job, String daxURL, String log, String namespace, String name, String version): Sets the prescript that ends up calling the default wrapper that introduces retry into Pegasus for a particular job. |
| Modifier and Type | Field and Description |
| --- | --- |
| private List<Job> | DataReuseEngine.mAllDeletedJobs: List of all deleted jobs during workflow reduction. |
| private List<Job> | TransferEngine.mDeletedJobs: Holds all the jobs deleted by the reduction algorithm. |
| Modifier and Type | Method and Description |
| --- | --- |
| Job | ReplicaCatalogBridge.makeRCRegNode(String regJobName, Job job, Collection files): It constructs the Job object for the registration node, which registers the materialized files on the output pool in the RLS. |
| Job | RemoveDirectory.makeRemoveDirJob(String site, String jobName): It creates a remove-directory job that removes a directory on the remote pool using the perl executable that Gaurang wrote. |
| Job | RemoveDirectory.makeRemoveDirJob(String site, String jobName, List<String> files): It creates a remove-directory job that removes a directory on the remote pool using the perl executable that Gaurang wrote. |
| protected Job | DeployWorkerPackage.makeUntarJob(String site, String jobName, String wpBasename): It creates an untar job that untars the worker package that is staged by the setup transfer job. |
| Modifier and Type | Method and Description |
| --- | --- |
| List<Job> | DataReuseEngine.getDeletedJobs(): This returns all the jobs deleted from the workflow after the reduction algorithm has run. |
| List<Job> | DataReuseEngine.getDeletedLeafJobs(): This returns all the deleted jobs that happen to be leaf nodes. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected void | Engine.complainForHeadNodeURLPrefix(String refiner, String site, FileServerType.OPERATION operation, Job job): Complains if the head node URL prefix is not specified. |
| private void | TransferEngine.complainForScratchFileServer(Job job, FileServerType.OPERATION operation, String site): Complains about a missing head node file server on a site for a job. |
| private FileTransfer | TransferEngine.constructFileTX(PegasusFile pf, Job job, String destSiteHandle, String path, boolean localTransfer): Constructs the FileTransfer object on the basis of the transiency information. |
| private String | RemoveDirectory.getAssociatedCreateDirSite(Job job): Returns the associated site that the job is dependent on. |
| private Vector | TransferEngine.getDeletedFileTX(String pool, Job job): Gets the file transfer objects corresponding to the location of files found in the replica mechanism, and transfers them to the output pool requested by the user. |
| private void | TransferEngine.getFilesFromRC(Job job, Collection searchFiles): Looks up the RCEngine hashtable for the locations of the files and adds nodes to transfer them. |
| private Vector | TransferEngine.getFileTX(String destPool, Job job, boolean localTransfer): Gets the Vector of FileTransfer objects for the files which have to be transferred to one destination pool. |
| private Collection<FileTransfer>[] | TransferEngine.getInterpoolFileTX(Job job, Collection<GraphNode> parents): Gets the Vector of FileTransfer objects for all the files which have to be transferred to the destination pool in the case of interpool transfers. |
| String | InterPoolEngine.getStagingSite(Job job): Returns the staging site to be used for a job. |
| String | TransferEngine.getStagingSite(Job job): Returns the staging site to be used for a job. |
| private String | TransferEngine.getURLOnSharedScratch(SiteCatalogEntry entry, Job job, FileServerType.OPERATION operation, String lfn): Returns a URL on the shared scratch of the staging site. |
| private void | InterPoolEngine.handleDependantExecutables(Job job): Handles the dependent executables that need to be staged. |
| private boolean | InterPoolEngine.incorporateHint(Job job, String key): Incorporates a hint in the namespace into the job. |
| private boolean | InterPoolEngine.incorporateProfiles(Job job): Incorporates the profiles from the various sources into the job. |
| protected void | InterPoolEngine.logRefinerAction(Job job): Logs the action taken by the refiner on a job as an XML fragment in the XML Producer. |
| private void | TransferEngine.logRemoval(Job job, PegasusFile file, String prefix, boolean removed): Helper method for logging a removal message. |
| Job | ReplicaCatalogBridge.makeRCRegNode(String regJobName, Job job, Collection files): It constructs the Job object for the registration node, which registers the materialized files on the output pool in the RLS. |
| private void | TransferEngine.processParents(Job job, Collection<GraphNode> parents): Processes a node's parents and determines if nodes are to be added or not. |
| private TransformationCatalogEntry | InterPoolEngine.selectTCEntry(List entries, Job job, String selector): Calls out to the transformation selector to select an entry from a list of valid transformation catalog entries. |
| private void | TransferEngine.trackInCaches(Job job): Tracks the files created by a job in both the planner and workflow caches; the planner cache stores the PUT URLs and the workflow cache stores the GET URLs. |
| Constructor and Description |
| --- |
| TransferEngine(ADag reducedDag, PegasusBag bag, List<Job> deletedJobs, List<Job> deletedLeafJobs): Overloaded constructor. |
| Modifier and Type | Method and Description |
| --- | --- |
| Job | CleanupImplementation.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| Job | RM.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| Job | Cleanup.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| Modifier and Type | Method and Description |
| --- | --- |
| Job | CleanupImplementation.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| Job | RM.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| Job | Cleanup.createCleanupJob(String id, List files, Job job): Creates a cleanup job that removes the files from the remote working directory. |
| protected String | InPlace.generateCleanupID(Job job): Returns the identifier that is to be assigned to the cleanup job. |
| protected String | InPlace.getSiteForCleanup(Job job): Returns the site to be used for the cleanup algorithm. |
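
A sketch of generating a cleanup node with the interface above, assuming the raw file list and the compute job whose files are being removed come from the cleanup algorithm; the import paths are assumptions.

```java
import java.util.List;
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.refiner.cleanup.CleanupImplementation;

public class MakeCleanup {
    // Builds a cleanup job that removes the given files from the
    // remote working directory used by the compute job.
    static Job cleanup(CleanupImplementation impl, String id, List files, Job computeJob) {
        return impl.createCleanupJob(id, files, computeJob);
    }
}
```
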
| Modifier and Type | Method and Description |
| --- | --- |
| Job | DefaultImplementation.makeCreateDirJob(String site, String name, String directoryURL): It creates a make-directory job that creates a directory at the given directoryURL on the remote pool using the perl executable that Gaurang wrote. |
| Job | Implementation.makeCreateDirJob(String site, String name, String directoryURL): It creates a make-directory job that creates a directory on the remote pool using the perl executable that Gaurang wrote. |
| Job | HourGlass.makeDummyConcatJob(ADag dag): It creates a dummy concat job that is run at the local submit host. |
| Modifier and Type | Method and Description |
| --- | --- |
| boolean | Minimal.addDependency(Job job) |
| private void | HourGlass.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| private String | Minimal.getAssociatedCreateDirSite(Job job): Returns the associated site that the job is dependent on. |
| private void | HourGlass.introduceRootDependencies(ADag dag, Job newRoot): It traverses through the root jobs of the dag and introduces a new super root node to it. |
| Modifier and Type | Method and Description |
| --- | --- |
| private void | Group.insert(Job job): Inserts the job into the group map. |
| void | RoundRobin.mapJob(Job job, List sites): Maps a job in the workflow to an execution site. |
| void | Random.mapJob(Job job, List sites): Maps a job in the workflow to an execution site. |
| abstract void | AbstractPerJob.mapJob(Job job, List sites): Maps a job in the workflow to the various grid sites. |
| void | NonJavaCallout.mapJob(Job job, List sites): Calls out to the external site selector. |
| private boolean | NonJavaCallout.parseStdOut(Job job, String s): Extracts the chosen site from the site selector's answer. |
| private File | NonJavaCallout.prepareInputFile(Job job, List pools): Writes job knowledge into the temporary file passed to the external site selector. |
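
A sketch of the per-job site selection contract above; candidate sites are passed in as a raw list of site handles, and how that list is built is outside this table. Import paths are assumptions.

```java
import java.util.List;
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.selector.site.AbstractPerJob;

public class SelectSite {
    // Asks the configured per-job selector to bind the job to one of
    // the candidate sites; the selector records its choice on the job.
    static void map(AbstractPerJob selector, Job job, List sites) {
        selector.mapJob(job, sites);
    }
}
```
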
| Modifier and Type | Method and Description |
| --- | --- |
| protected float | Algorithm.calculateAverageComputeTime(Job job): Returns the average compute time in seconds for a job. |
| protected int | Algorithm.getExpectedRuntime(Job job, TransformationCatalogEntry entry): Returns the expected runtime. |
| protected double | Algorithm.getExpectedRuntimeFromAC(Job job, TransformationCatalogEntry entry): Returns the expected runtime from the AC only if the process catalog is initialized. |
| String | Algorithm.mapJob2ExecPool(Job job, List pools): The call out to the site selector to determine on what pool the job should be scheduled. |
| Modifier and Type | Method and Description |
| --- | --- |
| Job | Implementation.createSetXBitJob(Job computeJob, Collection<FileTransfer> execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| Modifier and Type | Method and Description |
| --- | --- |
| void | Refiner.addInterSiteTXNodes(Job job, Collection files, boolean localTransfer): Adds the inter-pool transfer nodes that are required for transferring the output files of the parents to the job's execution site. |
| void | Refiner.addJob(Job job): Adds a new job to the workflow being refined. |
| boolean | Implementation.addSetXBitJobs(Job computeJob, String txJobName, Collection execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| void | AbstractRefiner.addStageInXFERNodes(Job job, Collection<FileTransfer> files): Default behaviour to preserve backward compatibility from when the stage-in and symbolic link jobs were not separated. |
| void | Refiner.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symLinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | AbstractRefiner.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Default behaviour to preserve backward compatibility from when the stage-in and symbolic link jobs were not separated. |
| void | Refiner.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | Refiner.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer, boolean deletedLeaf): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| Job | Implementation.createSetXBitJob(Job computeJob, Collection<FileTransfer> execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| TransferJob | Implementation.createTransferJob(Job job, String site, Collection files, Collection execFiles, String txJobName, int jobClass): This constructs the Job object for the transfer node. |
| Collection<FileTransfer> | SLS.determineSLSInputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the input files to the worker node directory. |
| Collection<FileTransfer> | SLS.determineSLSOutputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the output files from the worker node directory. |
| String | SLS.getSLSInputLFN(Job job): Returns the LFN of the SLS input file. |
| String | SLS.getSLSOutputLFN(Job job): Returns the LFN of the SLS output file. |
| String | SLS.invocationString(Job job, File slsFile): Constructs a command line invocation for a job, with a given SLS file. |
| boolean | SLS.modifyJobForWorkerNodeExecution(Job job, String stagingSiteURLPrefix, String stagingSitedirectory, String workerNodeDirectory): Modifies a compute job for second-level staging. |
| boolean | SLS.needsSLSInputTransfers(Job job): Returns a boolean indicating whether an input file is required for a job to do the transfers. |
| boolean | SLS.needsSLSOutputTransfers(Job job): Returns a boolean indicating whether an output file is required for a job to do the transfers. |
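
A sketch of the SLS decision flow suggested by this table: check whether second-level staging is needed, then build the input transfers. The FileServer and directory arguments are assumed to come from the staging site's catalog entry, and the import paths are assumptions.

```java
import java.util.Collection;
// Import paths are assumptions based on the usual Pegasus planner layout.
import edu.isi.pegasus.planner.catalog.site.classes.FileServer;
import edu.isi.pegasus.planner.classes.FileTransfer;
import edu.isi.pegasus.planner.classes.Job;
import edu.isi.pegasus.planner.transfer.SLS;

public class SlsInput {
    // Returns the second-level stage-in transfers for a job, or null
    // when the job needs no SLS input staging.
    static Collection<FileTransfer> stageIn(SLS sls, Job job, FileServer server,
                                            String stagingDir, String workerDir) {
        if (!sls.needsSLSInputTransfers(job)) {
            return null; // nothing to stage to the worker node
        }
        String lfn = sls.getSLSInputLFN(job); // LFN of the SLS input file
        return sls.determineSLSInputTransfers(job, lfn, server, stagingDir, workerDir);
    }
}
```
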
| Modifier and Type | Method and Description |
| --- | --- |
| Job | Abstract.createNoOPJob(String name): It creates a NoOP job that runs on the submit host. |
| protected Job | Abstract.createSetXBitJob(Collection<FileTransfer> files, String name, String site): Creates a dirmanager job that does a chmod on the files being staged. |
| protected Job | Abstract.createSetXBitJob(FileTransfer file, String name): Creates a dirmanager job that does a chmod on the file being staged. |
| Job | Abstract.createSetXBitJob(Job computeJob, Collection<FileTransfer> execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected boolean | Abstract.addSetXBitJobs(Job computeJob, Job txJob, Collection execFiles): Adds dirmanager jobs to the workflow that do a chmod on the files being staged. |
| boolean | Abstract.addSetXBitJobs(Job computeJob, String txJobName, Collection execFiles, int transferClass): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| boolean | Abstract.addSetXBitJobs(Job computeJob, String txJobName, Collection execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| protected void | Abstract.construct(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| Job | Abstract.createSetXBitJob(Job computeJob, Collection<FileTransfer> execFiles, int transferClass, int xbitIndex): Adds a dirmanager job to the workflow that does a chmod on the files being staged. |
| TransferJob | Stork.createTransferJob(Job job, FileTransfer file, Collection execFiles, String txJobName, int jobClass): Constructs a general transfer job that handles single transfers per transfer job. |
| TransferJob | AbstractSingleFTPerXFERJob.createTransferJob(Job job, String site, Collection files, Collection execFiles, String txJobName, int jobClass): Constructs a general transfer job that handles single transfers per transfer job. |
| TransferJob | AbstractMultipleFTPerXFERJob.createTransferJob(Job job, String site, Collection files, Collection execFiles, String txJobName, int jobClass): Constructs a general transfer job that handles multiple transfers per transfer job. |
| TransferJob | AbstractSingleFTPerXFERJob.createTransferJob(Job job, String site, FileTransfer file, Collection execFiles, String txJobName, int jobClass): Constructs a general transfer job that handles single transfers per transfer job. |
| Modifier and Type | Field and Description |
| --- | --- |
| private Map<String,Job> | Cluster.mSyncJobMap: Maps the site name to the current synch job. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected Job | Empty.createRegistrationJob(String regJobName, Job job, Collection files, ReplicaCatalogBridge rcb): Creates the registration job, which registers the materialized files on the output site in the Replica Catalog. |
| protected Job | Basic.createRegistrationJob(String regJobName, Job job, Collection files, ReplicaCatalogBridge rcb): Creates the registration job, which registers the materialized files on the output site in the Replica Catalog. |
| private Job | Cluster.createSyncJobBetweenLevels(String name): It creates a NoOP synch job that runs on the submit host. |
| Job | Cluster.getSyncJob(String site): Returns the current synch job for a site. |
| Modifier and Type | Method and Description |
| --- | --- |
| void | Empty.addInterSiteTXNodes(Job job, Collection files, boolean localTransfer): Adds the inter-pool transfer nodes that are required for transferring the output files of the parents to the job's execution site. |
| void | Basic.addInterSiteTXNodes(Job job, Collection files, boolean localTransfer): Adds the inter-pool transfer nodes that are required for transferring the output files of the parents to the job's execution site. |
| void | Empty.addJob(Job job): Adds a new job to the workflow being refined. |
| void | Basic.addJob(Job job): Adds a new job to the workflow being refined. |
| void | BalancedCluster.addStageInXFERNodes(Job job, boolean localTransfer, Collection files, int type, Map<String,BalancedCluster.PoolTransfer> stageInMap, BalancedCluster.BundleValue bundleValue, Implementation implementation): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Bundle.addStageInXFERNodes(Job job, boolean localTransfer, Collection files, int type, Map<String,Bundle.PoolTransfer> stageInMap, Bundle.BundleValue bundleValue, Implementation implementation): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Cluster.addStageInXFERNodes(Job job, boolean localTransfer, Collection files, int jobType, Map<String,Bundle.PoolTransfer> stageInMap, Bundle.BundleValue cValue, Implementation implementation): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Empty.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Bundle.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Basic.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | BalancedCluster.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Cluster.addStageInXFERNodes(Job job, Collection<FileTransfer> files, Collection<FileTransfer> symlinkFiles): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Empty.addStageInXFERNodes(Job job, Collection<FileTransfer> files, String prefix, Implementation implementation): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Basic.addStageInXFERNodes(Job job, Collection<FileTransfer> files, String prefix, Implementation implementation): Adds the stage-in transfer nodes which transfer the input files for a job, from the location returned by the replica catalog to the job's execution pool. |
| void | Empty.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | Basic.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | Empty.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer, boolean deletedLeaf): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | Bundle.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer, boolean deletedLeaf): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | Basic.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer, boolean deletedLeaf): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| void | BalancedCluster.addStageOutXFERNodes(Job job, Collection files, ReplicaCatalogBridge rcb, boolean localTransfer, boolean deletedLeaf): Adds the stage-out transfer nodes, which stage data to an output site specified by the user. |
| protected void | Cluster.constructCondorKey(Job job, String key, String value): Constructs a condor variable in the condor profile namespace associated with the job. |
| protected Job | Empty.createRegistrationJob(String regJobName, Job job, Collection files, ReplicaCatalogBridge rcb): Creates the registration job, which registers the materialized files on the output site in the Replica Catalog. |
| protected Job | Basic.createRegistrationJob(String regJobName, Job job, Collection files, ReplicaCatalogBridge rcb): Creates the registration job, which registers the materialized files on the output site in the Replica Catalog. |
| int | Bundle.BundleValue.determine(Implementation implementation, Job job): Determines the bundle factor for a particular site on the basis of the stage-in bundle value associated with the underlying transfer transformation in the transformation catalog. |
| int | BalancedCluster.BundleValue.determine(Implementation implementation, Job job): Determines the bundle factor for a particular site on the basis of the stage-in bundle value associated with the underlying transfer transformation in the transformation catalog. |
| protected String | Bundle.getComputeJobBundleValue(Job job): Returns the bundle value associated with a compute job as a String. |
| protected String | BalancedCluster.getComputeJobBundleValue(Job job): Returns the bundle value associated with a compute job as a String. |
| protected String | Cluster.getComputeJobBundleValue(Job job): Returns the bundle value associated with a compute job as a String. |
| protected int | Basic.getJobPriority(Job job): Returns the priority associated with a job based on the condor profile key priority. |
| protected void | Basic.logRefinerAction(Job computeJob, Job txJob, Collection files, String type): Records the refiner action into the Provenance Store as an XML fragment. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected Map<String,Bundle.PoolTransfer> | Cluster.resetStageInMap(Map<String,Bundle.PoolTransfer> stageInMap, Implementation implementation, Map<String,Job> transientSynchJobMap, int jobType, boolean createChildSyncJob, boolean localTransfer): Resets the stage-in map and adds the stage-in jobs for each site per level. |
| Modifier and Type | Method and Description |
| --- | --- |
| protected void | Transfer.complainForHeadNodeURLPrefix(Job job, String site): Complains if the head node URL prefix is not specified. |
| Collection<FileTransfer> | Transfer.determineSLSInputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the input files to the worker node directory. |
| Collection<FileTransfer> | Condor.determineSLSInputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the input files to the worker node directory. |
| Collection<FileTransfer> | Transfer.determineSLSOutputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the output files from the worker node directory. |
| Collection<FileTransfer> | Condor.determineSLSOutputTransfers(Job job, String fileName, FileServer stagingSiteServer, String stagingSiteDirectory, String workerNodeDirectory): Generates a second-level staging file of the output files from the worker node directory. |
| String | Transfer.getSLSInputLFN(Job job): Returns the LFN of the SLS input file. |
| String | Condor.getSLSInputLFN(Job job): Returns the LFN of the SLS input file. |
| String | Transfer.getSLSOutputLFN(Job job): Returns the LFN of the SLS output file. |
| String | Condor.getSLSOutputLFN(Job job): Returns the LFN of the SLS output file. |
| String | Transfer.invocationString(Job job, File slsFile): Constructs a command line invocation for a job, with a given SLS file. |
| String | Condor.invocationString(Job job, File slsFile): Constructs a command line invocation for a job, with a given SLS file. |
| boolean | Condor.modifyJobForFirstLevelStaging(Job job, String submitDir, String slsInputLFN, String slsOutputLFN): Modifies a job for first-level staging to the head node; this is to add any files that need to be staged to the head node for a job, specific to the SLS implementation. |
| boolean | Transfer.modifyJobForWorkerNodeExecution(Job job, String stagingSiteURLPrefix, String stagingSitedirectory, String workerNodeDirectory): Modifies a compute job for second-level staging. |
| boolean | Condor.modifyJobForWorkerNodeExecution(Job job, String stagingSiteURLPrefix, String stagingSiteDirectory, String workerNodeDirectory): Modifies a compute job for second-level staging. |
| boolean | Transfer.needsSLSInputTransfers(Job job): Returns a boolean indicating whether an input file is required for a job to do the transfers. |
| boolean | Condor.needsSLSInputTransfers(Job job): Returns a boolean indicating whether an input file is required for a job to do the transfers. |
| boolean | Transfer.needsSLSOutputTransfers(Job job): Returns a boolean indicating whether an output file is required for a job to do the transfers. |
| boolean | Condor.needsSLSOutputTransfers(Job job): Returns a boolean indicating whether an output file is required for a job to do the transfers. |