Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::internal::generic_scheduler Class Reference [abstract]

Work stealing task scheduler. More...

#include <scheduler.h>

Inheritance diagram for tbb::internal::generic_scheduler:
Collaboration diagram for tbb::internal::generic_scheduler:

Public Member Functions

bool is_task_pool_published () const
 
bool is_local_task_pool_quiescent () const
 
bool is_quiescent_local_task_pool_empty () const
 
bool is_quiescent_local_task_pool_reset () const
 
void attach_mailbox (affinity_id id)
 
void init_stack_info ()
 Sets up the data necessary for the stealing limiting heuristics. More...
 
bool can_steal ()
 Returns true if stealing is allowed. More...
 
void publish_task_pool ()
 Used by workers to enter the task pool. More...
 
void leave_task_pool ()
 Leave the task pool. More...
 
void reset_task_pool_and_leave ()
 Resets head and tail indices to 0, and leaves the task pool. More...
 
task ** lock_task_pool (arena_slot *victim_arena_slot) const
 Locks victim's task pool, and returns pointer to it. The pointer can be NULL. More...
 
void unlock_task_pool (arena_slot *victim_arena_slot, task **victim_task_pool) const
 Unlocks victim's task pool. More...
 
void acquire_task_pool () const
 Locks the local task pool. More...
 
void release_task_pool () const
 Unlocks the local task pool. More...
 
task * prepare_for_spawning (task *t)
 Checks if t is affinitized to another thread, and if so, bundles it as a proxy. More...
 
void commit_spawned_tasks (size_t new_tail)
 Makes newly spawned tasks visible to thieves. More...
 
void commit_relocated_tasks (size_t new_tail)
 Makes relocated tasks visible to thieves and releases the local task pool. More...
 
task * get_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Get a task from the local pool. More...
 
task * get_task (size_t T)
 Get a task from the local pool at specified location T. More...
 
task * get_mailbox_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempt to get a task from the mailbox. More...
 
task * steal_task (__TBB_ISOLATION_EXPR(isolation_tag isolation))
 Attempts to steal a task from a randomly chosen thread/scheduler. More...
 
task * steal_task_from (__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
 Steal task from another scheduler's ready pool. More...
 
size_t prepare_task_pool (size_t n)
 Makes sure that the task pool can accommodate at least n more elements. More...
 
bool cleanup_master (bool blocking_terminate)
 Perform necessary cleanup when a master thread stops using TBB. More...
 
void assert_task_pool_valid () const
 
void attach_arena (arena *, size_t index, bool is_master)
 
void nested_arena_entry (arena *, size_t)
 
void nested_arena_exit ()
 
void wait_until_empty ()
 
void spawn (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void spawn_root_and_wait (task &first, task *&next) __TBB_override
 For internal use only. More...
 
void enqueue (task &, void *reserved) __TBB_override
 For internal use only. More...
 
void local_spawn (task *first, task *&next)
 
void local_spawn_root_and_wait (task *first, task *&next)
 
virtual void local_wait_for_all (task &parent, task *child)=0
 
void free_scheduler ()
 Destroy and deallocate this scheduler object. More...
 
task & allocate_task (size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
 Allocate task object, either from the heap or a free list. More...
 
template<free_task_hint h>
void free_task (task &t)
 Put task on free list. More...
 
void deallocate_task (task &t)
 Return task object to the memory allocator. More...
 
bool is_worker () const
 True if running on a worker thread, false otherwise. More...
 
bool outermost_level () const
 True if the scheduler is on the outermost dispatch level. More...
 
bool master_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a master thread. More...
 
bool worker_outermost_level () const
 True if the scheduler is on the outermost dispatch level in a worker thread. More...
 
unsigned max_threads_in_arena ()
 Returns the concurrency limit of the current arena. More...
 
virtual task * receive_or_steal_task (__TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation))=0
 Try getting a task from other threads (via mailbox, stealing, FIFO queue, orphans adoption). More...
 
void free_nonlocal_small_task (task &t)
 Free a small task t that was allocated by a different scheduler. More...
 
- Public Member Functions inherited from tbb::internal::scheduler
virtual void wait_for_all (task &parent, task *child)=0
 For internal use only. More...
 
virtual ~scheduler ()=0
 Pure virtual destructor. More...
 

Static Public Member Functions

static bool is_version_3_task (task &t)
 
static bool is_proxy (const task &t)
 True if t is a task_proxy. More...
 
static generic_scheduler * create_master (arena *a)
 Initialize a scheduler for a master thread. More...
 
static generic_scheduler * create_worker (market &m, size_t index)
 Initialize a scheduler for a worker thread. More...
 
static void cleanup_worker (void *arg, bool worker)
 Perform necessary cleanup when a worker thread finishes. More...
 
static task * plugged_return_list ()
 Special value used to mark my_return_list as not taking any more entries. More...
 

Public Attributes

uintptr_t my_stealing_threshold
 Position in the call stack specifying its maximal filling when stealing is still allowed. More...
 
market * my_market
 The market I am in. More...
 
FastRandom my_random
 Random number generator used for picking a random victim from which to steal. More...
 
task * my_free_list
 Free list of small tasks that can be reused. More...
 
task * my_dummy_task
 Fake root task created by slave threads. More...
 
long my_ref_count
 Reference count for scheduler. More...
 
bool my_auto_initialized
 True if *this was created by automatic TBB initialization. More...
 
__TBB_atomic intptr_t my_small_task_count
 Number of small tasks that have been allocated by this scheduler. More...
 
task * my_return_list
 List of small tasks that have been returned to this scheduler by other schedulers. More...
 
- Public Attributes inherited from tbb::internal::intrusive_list_node
intrusive_list_node * my_prev_node
 
intrusive_list_node * my_next_node
 
- Public Attributes inherited from tbb::internal::scheduler_state
size_t my_arena_index
 Index of the arena slot the scheduler occupies now, or occupied last time. More...
 
arena_slot * my_arena_slot
 Pointer to the slot in the arena we own at the moment. More...
 
arena * my_arena
 The arena that I own (if master) or am servicing at the moment (if worker) More...
 
task * my_innermost_running_task
 Innermost task whose task::execute() is running. A dummy task on the outermost level. More...
 
mail_inbox my_inbox
 
affinity_id my_affinity_id
 The mailbox id assigned to this scheduler. More...
 
scheduler_properties my_properties
 

Static Public Attributes

static const size_t quick_task_size = 256-task_prefix_reservation_size
 If sizeof(task) is <=quick_task_size, it is handled on a free list instead of malloc'd. More...
 
static const size_t null_arena_index = ~size_t(0)
 
static const size_t min_task_pool_size = 64
 

Protected Member Functions

 generic_scheduler (market &)
 

Friends

template<typename SchedulerTraits >
class custom_scheduler
 

Detailed Description

Work stealing task scheduler.

None of the fields here are ever read or written by threads other than the thread that creates the instance.

Class generic_scheduler is an abstract base class that contains most of the scheduler, except for tweaks specific to processors and tools (e.g. VTune(TM) Performance Tools). The derived template class custom_scheduler<SchedulerTraits> fills in the tweaks.

Definition at line 120 of file scheduler.h.
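
The split between generic_scheduler and custom_scheduler<SchedulerTraits> is the usual compile-time customization idiom: the base class holds the bulk of the dispatch machinery and declares a few pure virtual hooks (local_wait_for_all(), receive_or_steal_task()), while a class template parameterized by a traits type supplies the processor/tool-specific behavior. A minimal standalone sketch of the idiom follows; the class and trait names (generic_scheduler_like, DefaultTraits, itt_possible) are illustrative, not the actual TBB types.

#include <iostream>

// Hypothetical traits type: compile-time switches the derived scheduler consults.
struct DefaultTraits {
    static const bool itt_possible = true;       // illustrative flag only
};

class generic_scheduler_like {                   // plays the role of generic_scheduler
public:
    virtual ~generic_scheduler_like() {}
    void run() { wait_for_all(); }               // shared machinery lives in the base
protected:
    virtual void wait_for_all() = 0;             // hook filled in by the template subclass
};

template<typename Traits>
class custom_scheduler_like : public generic_scheduler_like {   // role of custom_scheduler<SchedulerTraits>
protected:
    void wait_for_all() override {
        if (Traits::itt_possible)
            std::cout << "dispatch loop with tool notifications\n";
        else
            std::cout << "plain dispatch loop\n";
    }
};

int main() {
    custom_scheduler_like<DefaultTraits> s;
    s.run();
    return 0;
}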

Constructor & Destructor Documentation

◆ generic_scheduler()

tbb::internal::generic_scheduler::generic_scheduler ( market &  m)
protected

Definition at line 84 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), allocate_task(), assert_task_pool_valid(), tbb::internal::assert_task_valid(), tbb::internal::es_task_proxy, tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), ITT_SYNC_CREATE, min_task_pool_size, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::scheduler_state::my_properties, my_return_list, tbb::internal::arena_slot_line2::my_task_pool_size, tbb::internal::scheduler_properties::outermost, tbb::task::prefix(), tbb::task::ready, release_task_pool(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

85  : my_market(&m)
86  , my_random(this)
87  , my_ref_count(1)
88  , my_small_task_count(1) // Extra 1 is a guard reference
89 #if __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT
90  , my_cilk_state(cs_none)
91 #endif /* __TBB_SURVIVE_THREAD_SWITCH && TBB_USE_ASSERT */
92 {
93  __TBB_ASSERT( !my_arena_index, "constructor expects the memory being zero-initialized" );
94  __TBB_ASSERT( governor::is_set(NULL), "scheduler is already initialized for this thread" );
95 
96  my_innermost_running_task = my_dummy_task = &allocate_task( sizeof(task), __TBB_CONTEXT_ARG(NULL, &the_dummy_context) );
97 #if __TBB_PREVIEW_CRITICAL_TASKS
98  my_properties.has_taken_critical_task = false;
99 #endif
100  my_properties.outermost = true;
101 #if __TBB_TASK_PRIORITY
102  my_ref_top_priority = &m.my_global_top_priority;
103  my_ref_reload_epoch = &m.my_global_reload_epoch;
104 #endif /* __TBB_TASK_PRIORITY */
105 #if __TBB_TASK_GROUP_CONTEXT
106  // Sync up the local cancellation state with the global one. No need for fence here.
107  my_context_state_propagation_epoch = the_context_state_propagation_epoch;
108  my_context_list_head.my_prev = &my_context_list_head;
109  my_context_list_head.my_next = &my_context_list_head;
110  ITT_SYNC_CREATE(&my_context_list_mutex, SyncType_Scheduler, SyncObj_ContextsList);
111 #endif /* __TBB_TASK_GROUP_CONTEXT */
112  ITT_SYNC_CREATE(&my_dummy_task->prefix().ref_count, SyncType_Scheduler, SyncObj_WorkerLifeCycleMgmt);
113  ITT_SYNC_CREATE(&my_return_list, SyncType_Scheduler, SyncObj_TaskReturnList);
114 }

Member Function Documentation

◆ acquire_task_pool()

void tbb::internal::generic_scheduler::acquire_task_pool ( ) const
inline

Locks the local task pool.

Garbles my_arena_slot->task_pool for the duration of the lock. Requires correctly set my_arena_slot->task_pool_ptr.

ATTENTION: This method is mostly the same as generic_scheduler::lock_task_pool(), with slightly different logic for the slot state checks (the slot is either locked or points to our task pool). If either of them is changed, consider changing the counterpart as well.

Definition at line 456 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::as_atomic(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, tbb::internal::atomic_backoff::pause(), tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), enqueue(), generic_scheduler(), get_task(), and prepare_task_pool().

456  {
457  if ( !is_task_pool_published() )
458  return; // we are not in arena - nothing to lock
459  bool sync_prepare_done = false;
460  for( atomic_backoff b;;b.pause() ) {
461 #if TBB_USE_ASSERT
462  __TBB_ASSERT( my_arena_slot == my_arena->my_slots + my_arena_index, "invalid arena slot index" );
463  // Local copy of the arena slot task pool pointer is necessary for the next
464  // assertion to work correctly to exclude asynchronous state transition effect.
465  task** tp = my_arena_slot->task_pool;
466  __TBB_ASSERT( tp == LockedTaskPool || tp == my_arena_slot->task_pool_ptr, "slot ownership corrupt?" );
467 #endif
470  {
471  // We acquired our own slot
472  ITT_NOTIFY(sync_acquired, my_arena_slot);
473  break;
474  }
475  else if( !sync_prepare_done ) {
476  // Start waiting
477  ITT_NOTIFY(sync_prepare, my_arena_slot);
478  sync_prepare_done = true;
479  }
480  // Someone else acquired a lock, so pause and do exponential backoff.
481  }
482  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "not really acquired task pool" );
483 } // generic_scheduler::acquire_task_pool
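
The locking protocol shown above can be modeled in isolation: the slot publishes a pointer to its deque, and locking means swinging that published pointer to a sentinel value (LockedTaskPool) with a compare-and-swap, backing off while another thread holds it. The sketch below uses std::atomic rather than the TBB machine primitives, omits the EmptyTaskPool case, and the names slot_t and locked_sentinel are illustrative.

#include <atomic>
#include <cstdint>
#include <thread>

struct task;                                          // opaque task type for the sketch

struct slot_t {                                       // illustrative model of an arena slot
    std::atomic<task**> task_pool;                    // published pointer, or the sentinel when locked
    task** task_pool_ptr;                             // the actual deque storage
    slot_t() : task_pool(nullptr), task_pool_ptr(nullptr) {}
};

task** const locked_sentinel = reinterpret_cast<task**>(~std::uintptr_t(0));

// Lock our own slot: swing the published pointer to the sentinel.
void acquire_own_pool(slot_t& s) {
    for (;;) {
        task** expected = s.task_pool_ptr;            // the owner's pointer is the only valid unlocked value
        if (s.task_pool.compare_exchange_weak(expected, locked_sentinel))
            return;                                   // acquired
        std::this_thread::yield();                    // someone else holds it: back off and retry
    }
}

// Unlock: re-publish the deque pointer with release semantics.
void release_own_pool(slot_t& s) {
    s.task_pool.store(s.task_pool_ptr, std::memory_order_release);
}

int main() {
    task* storage[4] = { nullptr, nullptr, nullptr, nullptr };
    slot_t s;
    s.task_pool_ptr = storage;
    s.task_pool.store(storage);
    acquire_own_pool(s);
    release_own_pool(s);
    return 0;
}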

◆ allocate_task()

task & tbb::internal::generic_scheduler::allocate_task ( size_t  number_of_bytes,
__TBB_CONTEXT_ARG(task *parent, task_group_context *context)   
)

Allocate task object, either from the heap or a free list.

Returns an uninitialized task object with an initialized prefix.

Definition at line 300 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_prefetch, __TBB_ISOLATION_EXPR, tbb::internal::task_prefix::affinity, tbb::task::allocated, tbb::internal::task_prefix::context, tbb::internal::task_prefix::depth, tbb::internal::task_prefix::extra_state, tbb::task::freed, GATHER_STATISTIC, tbb::internal::task_prefix::isolation, ITT_NOTIFY, my_free_list, my_return_list, my_small_task_count, tbb::internal::NFS_Allocate(), tbb::internal::no_isolation, tbb::internal::task_prefix::owner, p, parent, tbb::internal::task_prefix::parent, tbb::task::prefix(), quick_task_size, tbb::internal::task_prefix::ref_count, tbb::internal::task_prefix::state, tbb::task::state(), and tbb::internal::task_prefix_reservation_size.

Referenced by tbb::internal::allocate_additional_child_of_proxy::allocate(), tbb::internal::allocate_continuation_proxy::allocate(), tbb::internal::allocate_child_proxy::allocate(), generic_scheduler(), and prepare_for_spawning().

301  {
302  GATHER_STATISTIC(++my_counters.active_tasks);
303  task *t;
304  if( number_of_bytes<=quick_task_size ) {
305 #if __TBB_HOARD_NONLOCAL_TASKS
306  if( (t = my_nonlocal_free_list) ) {
307  GATHER_STATISTIC(--my_counters.free_list_length);
308  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
309  my_nonlocal_free_list = t->prefix().next;
310  } else
311 #endif
312  if( (t = my_free_list) ) {
313  GATHER_STATISTIC(--my_counters.free_list_length);
314  __TBB_ASSERT( t->state()==task::freed, "free list of tasks is corrupted" );
315  my_free_list = t->prefix().next;
316  } else if( my_return_list ) {
317  // No fence required for read of my_return_list above, because __TBB_FetchAndStoreW has a fence.
318  t = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 ); // with acquire
319  __TBB_ASSERT( t, "another thread emptied the my_return_list" );
320  __TBB_ASSERT( t->prefix().origin==this, "task returned to wrong my_return_list" );
321  ITT_NOTIFY( sync_acquired, &my_return_list );
322  my_free_list = t->prefix().next;
323  } else {
325 #if __TBB_COUNT_TASK_NODES
326  ++my_task_node_count;
327 #endif /* __TBB_COUNT_TASK_NODES */
328  t->prefix().origin = this;
329  t->prefix().next = 0;
331  }
332 #if __TBB_PREFETCHING
333  task *t_next = t->prefix().next;
334  if( !t_next ) { // the task was last in the list
335 #if __TBB_HOARD_NONLOCAL_TASKS
336  if( my_free_list )
337  t_next = my_free_list;
338  else
339 #endif
340  if( my_return_list ) // enable prefetching, gives speedup
341  t_next = my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, 0 );
342  }
343  if( t_next ) { // gives speedup for both cache lines
344  __TBB_cl_prefetch(t_next);
345  __TBB_cl_prefetch(&t_next->prefix());
346  }
347 #endif /* __TBB_PREFETCHING */
348  } else {
349  GATHER_STATISTIC(++my_counters.big_tasks);
350  t = (task*)((char*)NFS_Allocate( 1, task_prefix_reservation_size+number_of_bytes, NULL ) + task_prefix_reservation_size );
351 #if __TBB_COUNT_TASK_NODES
352  ++my_task_node_count;
353 #endif /* __TBB_COUNT_TASK_NODES */
354  t->prefix().origin = NULL;
355  }
356  task_prefix& p = t->prefix();
357 #if __TBB_TASK_GROUP_CONTEXT
358  p.context = context;
359 #endif /* __TBB_TASK_GROUP_CONTEXT */
360  // Obsolete. But still in use, so has to be assigned correct value here.
361  p.owner = this;
362  p.ref_count = 0;
363  // Obsolete. Assign some not outrageously out-of-place value for a while.
364  p.depth = 0;
365  p.parent = parent;
366  // In TBB 2.1 and later, the constructor for task sets extra_state to indicate the version of the tbb/task.h header.
367  // In TBB 2.0 and earlier, the constructor leaves extra_state as zero.
368  p.extra_state = 0;
369  p.affinity = 0;
370  p.state = task::allocated;
371  __TBB_ISOLATION_EXPR( p.isolation = no_isolation );
372  return *t;
373 }
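
The allocation policy above (objects up to quick_task_size come from a per-thread free list, everything else straight from the heap) can be illustrated with a short standalone sketch. The names small_pool and node_t and the fixed 256-byte threshold are illustrative stand-ins, not the TBB types, and the real allocator also carves out space for the task prefix.

#include <cstddef>
#include <cstdlib>

struct node_t { node_t* next; };                 // freed blocks are linked through their own storage

const std::size_t quick_size = 256;              // same idea as quick_task_size

struct small_pool {                              // per-thread cache of small blocks
    node_t* free_list;
    small_pool() : free_list(0) {}

    void* allocate(std::size_t bytes) {
        if (bytes <= quick_size && free_list) {  // reuse a previously freed small block
            node_t* n = free_list;
            free_list = n->next;
            return n;
        }
        // Fall back to the heap; small blocks are always carved at quick_size
        // so that they can go back onto the free list later.
        return std::malloc(bytes <= quick_size ? quick_size : bytes);
    }

    void deallocate(void* p, std::size_t bytes) {
        if (bytes <= quick_size) {               // small: push back onto the free list
            node_t* n = static_cast<node_t*>(p);
            n->next = free_list;
            free_list = n;
        } else {
            std::free(p);                        // big: return to the heap immediately
        }
    }
};

int main() {
    small_pool pool;
    void* a = pool.allocate(100);
    pool.deallocate(a, 100);
    void* b = pool.allocate(64);                 // likely reuses the block just freed
    pool.deallocate(b, 64);
    return 0;
}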

◆ assert_task_pool_valid()

void tbb::internal::generic_scheduler::assert_task_pool_valid ( ) const
inline

Definition at line 319 of file scheduler.h.

References __TBB_CONTEXT_ARG, __TBB_override, tbb::internal::first(), and parent.

Referenced by tbb::internal::custom_scheduler< SchedulerTraits >::allocate_scheduler(), enqueue(), generic_scheduler(), local_spawn(), prepare_task_pool(), and tbb::task::self().

319 {}

◆ attach_arena()

void tbb::internal::generic_scheduler::attach_arena ( arena *  a,
size_t  index,
bool  is_master 
)

Definition at line 36 of file arena.cpp.

References __TBB_ASSERT, attach_mailbox(), tbb::internal::mail_inbox::is_idle_state(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, tbb::internal::scheduler_state::my_inbox, tbb::internal::arena_base::my_market, my_market, tbb::internal::arena::my_slots, tbb::task::prefix(), and tbb::internal::mail_inbox::set_is_idle().

Referenced by create_master(), tbb::internal::governor::init_scheduler(), nested_arena_entry(), and tbb::internal::arena::process().

36  {
37  __TBB_ASSERT( a->my_market == my_market, NULL );
38  my_arena = a;
39  my_arena_index = index;
40  my_arena_slot = a->my_slots + index;
41  attach_mailbox( affinity_id(index+1) );
42  if ( is_master && my_inbox.is_idle_state( true ) ) {
43  // Master enters an arena with its own task to be executed. It means that master is not
44  // going to enter stealing loop and take affinity tasks.
45  my_inbox.set_is_idle( false );
46  }
47 #if __TBB_TASK_GROUP_CONTEXT
48  // Context to be used by root tasks by default (if the user has not specified one).
49  if( !is_master )
50  my_dummy_task->prefix().context = a->my_default_ctx;
51 #endif /* __TBB_TASK_GROUP_CONTEXT */
52 #if __TBB_TASK_PRIORITY
53  // In the current implementation master threads continue processing even when
54  // there are other masters with higher priority. Only TBB worker threads are
55  // redistributed between arenas based on the latters' priority. Thus master
56  // threads use arena's top priority as a reference point (in contrast to workers
57  // that use my_market->my_global_top_priority).
58  if( is_master ) {
59  my_ref_top_priority = &a->my_top_priority;
60  my_ref_reload_epoch = &a->my_reload_epoch;
61  }
62  my_local_reload_epoch = *my_ref_reload_epoch;
63  __TBB_ASSERT( !my_offloaded_tasks, NULL );
64 #endif /* __TBB_TASK_PRIORITY */
65 }

◆ attach_mailbox()

void tbb::internal::generic_scheduler::attach_mailbox ( affinity_id  id)
inline

Definition at line 585 of file scheduler.h.

References __TBB_ASSERT, and id.

Referenced by attach_arena().

585  {
586  __TBB_ASSERT(id>0,NULL);
588  my_affinity_id = id;
589 }

◆ can_steal()

bool tbb::internal::generic_scheduler::can_steal ( )
inline

Returns true if stealing is allowed.

Definition at line 191 of file scheduler.h.

References __TBB_get_bsp(), and __TBB_ISOLATION_EXPR.

191  {
192  int anchor;
193  // TODO IDEA: Add performance warning?
194 #if __TBB_ipf
195  return my_stealing_threshold < (uintptr_t)&anchor && (uintptr_t)__TBB_get_bsp() < my_rsb_stealing_threshold;
196 #else
197  return my_stealing_threshold < (uintptr_t)&anchor;
198 #endif
199  }
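
The heuristic takes the address of a local variable as a cheap proxy for the current stack depth and refuses further stealing once the stack has grown past my_stealing_threshold, which init_stack_info() derives from the thread's stack base and size. A hedged standalone sketch of the same idea; the stack_size_budget parameter, the "half the budget" cutoff, and the assumption of a downward-growing stack are illustrative choices, not the actual TBB computation.

#include <cstddef>
#include <cstdint>
#include <iostream>

// Threshold computed once, low in the thread's stack (assumes a downward-growing stack).
static std::uintptr_t stealing_threshold = 0;

void init_stack_info_sketch(std::size_t stack_size_budget) {
    int anchor;                                       // address approximates the current stack position
    stealing_threshold = reinterpret_cast<std::uintptr_t>(&anchor) - stack_size_budget / 2;
}

bool can_steal_sketch() {
    int anchor;                                       // deeper calls give smaller addresses
    return stealing_threshold < reinterpret_cast<std::uintptr_t>(&anchor);
}

void recurse(int depth) {
    volatile char padding[4096];                      // consume some stack per level
    padding[0] = 0;
    if (!can_steal_sketch()) {
        std::cout << "stealing disabled at depth " << depth << "\n";
        return;
    }
    if (depth < 64)
        recurse(depth + 1);
}

int main() {
    init_stack_info_sketch(128 * 1024);               // pretend ~128 KB of stack may be used
    recurse(0);
    return 0;
}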

◆ cleanup_master()

bool tbb::internal::generic_scheduler::cleanup_master ( bool  blocking_terminate)

Perform necessary cleanup when a master thread stops using TBB.

Definition at line 1300 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), EmptyTaskPool, free_scheduler(), tbb::internal::arena_slot_line1::head, tbb::internal::governor::is_set(), is_task_pool_published(), leave_task_pool(), local_wait_for_all(), lock, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, tbb::internal::NFS_Free(), tbb::internal::arena::on_thread_leaving(), tbb::internal::arena::ref_external, tbb::internal::market::release(), release_task_pool(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line1::task_pool.

Referenced by tbb::internal::governor::auto_terminate(), and tbb::internal::governor::terminate_scheduler().

1300  {
1301  arena* const a = my_arena;
1302  market * const m = my_market;
1303  __TBB_ASSERT( my_market, NULL );
1304  if( a && is_task_pool_published() ) {
1308  {
1309  // Local task pool is empty
1310  leave_task_pool();
1311  }
1312  else {
1313  // Master's local task pool may e.g. contain proxies of affinitized tasks.
1315  __TBB_ASSERT ( governor::is_set(this), "TLS slot is cleared before the task pool cleanup" );
1318  __TBB_ASSERT ( governor::is_set(this), "Other thread reused our TLS key during the task pool cleanup" );
1319  }
1320  }
1321 #if __TBB_ARENA_OBSERVER
1322  if( a )
1323  a->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
1324 #endif
1325 #if __TBB_SCHEDULER_OBSERVER
1326  the_global_observer_list.notify_exit_observers( my_last_global_observer, /*worker=*/false );
1327 #endif /* __TBB_SCHEDULER_OBSERVER */
1328 #if _WIN32||_WIN64
1329  m->unregister_master( master_exec_resource );
1330 #endif /* _WIN32||_WIN64 */
1331  if( a ) {
1332  __TBB_ASSERT(a->my_slots+0 == my_arena_slot, NULL);
1333 #if __TBB_STATISTICS
1334  *my_arena_slot->my_counters += my_counters;
1335 #endif /* __TBB_STATISTICS */
1337  }
1338 #if __TBB_TASK_GROUP_CONTEXT
1339  else { // task_group_context ownership was not transferred to arena
1340  default_context()->~task_group_context();
1341  NFS_Free(default_context());
1342  }
1343  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1344  my_market->my_masters.remove( *this );
1345  lock.release();
1346 #endif /* __TBB_TASK_GROUP_CONTEXT */
1347  my_arena_slot = NULL; // detached from slot
1348  free_scheduler(); // do not use scheduler state after this point
1349 
1350  if( a )
1351  a->on_thread_leaving<arena::ref_external>();
1352  // If there was an associated arena, it added a public market reference
1353  return m->release( /*is_public*/ a != NULL, blocking_terminate );
1354 }

◆ cleanup_worker()

void tbb::internal::generic_scheduler::cleanup_worker ( void *  arg,
bool  worker 
)
static

Perform necessary cleanup when a worker thread finishes.

Definition at line 1290 of file scheduler.cpp.

References __TBB_ASSERT, free_scheduler(), tbb::internal::scheduler_state::my_arena_slot, and s.

Referenced by tbb::internal::market::cleanup().

1290  {
1292  __TBB_ASSERT( !s.my_arena_slot, "cleaning up attached worker" );
1293 #if __TBB_SCHEDULER_OBSERVER
1294  if ( worker ) // can be called by master for worker, do not notify master twice
1295  the_global_observer_list.notify_exit_observers( s.my_last_global_observer, /*worker=*/true );
1296 #endif /* __TBB_SCHEDULER_OBSERVER */
1297  s.free_scheduler();
1298 }

◆ commit_relocated_tasks()

void tbb::internal::generic_scheduler::commit_relocated_tasks ( size_t  new_tail)
inline

Makes relocated tasks visible to thieves and releases the local task pool.

Obviously, the task pool must be locked when calling this method.

Definition at line 637 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), and __TBB_store_release.

Referenced by prepare_task_pool().

637  {
639  "Task pool must be locked when calling commit_relocated_tasks()" );
641  // Tail is updated last to minimize probability of a thread making arena
642  // snapshot being misguided into thinking that this task pool is empty.
643  __TBB_store_release( my_arena_slot->tail, new_tail );
645 }

◆ commit_spawned_tasks()

void tbb::internal::generic_scheduler::commit_spawned_tasks ( size_t  new_tail)
inline

Makes newly spawned tasks visible to thieves.

Definition at line 628 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, and sync_releasing.

Referenced by local_spawn().

628  {
629  __TBB_ASSERT ( new_tail <= my_arena_slot->my_task_pool_size, "task deque end was overwritten" );
630  // emit "task was released" signal
631  ITT_NOTIFY(sync_releasing, (void*)((uintptr_t)my_arena_slot+sizeof(uintptr_t)));
632  // Release fence is necessary to make sure that previously stored task pointers
633  // are visible to thieves.
635 }
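
Both commit_spawned_tasks() and commit_relocated_tasks() rely on the same publication rule: the task pointers are written into the deque first, and only then is tail advanced with release semantics, so a thief that reads tail with acquire semantics is guaranteed to see every task the new value covers. A minimal standalone sketch of that rule with std::atomic; deque_t and its fixed capacity are illustrative, not the arena_slot layout.

#include <atomic>
#include <cassert>
#include <cstddef>

struct task;                                        // opaque task type for the sketch

struct deque_t {
    static const std::size_t capacity = 64;
    task* slots[capacity];
    std::atomic<std::size_t> head;
    std::atomic<std::size_t> tail;
    deque_t() : head(0), tail(0) {}
};

// Owner side: store the task pointer, then publish it by advancing tail.
void commit_spawned(deque_t& d, task* t) {
    std::size_t old_tail = d.tail.load(std::memory_order_relaxed);
    assert(old_tail < deque_t::capacity);
    d.slots[old_tail] = t;                          // plain store: not yet visible to thieves
    d.tail.store(old_tail + 1, std::memory_order_release);   // release: makes the store above visible
}

// Thief side: an acquire load of tail guarantees the covered slots are visible.
task* peek_last(deque_t& d) {
    std::size_t t = d.tail.load(std::memory_order_acquire);
    std::size_t h = d.head.load(std::memory_order_relaxed);
    return t > h ? d.slots[t - 1] : 0;
}

int main() {
    deque_t d;
    task* dummy = reinterpret_cast<task*>(0x1);     // placeholder value, not a real task
    commit_spawned(d, dummy);
    return peek_last(d) == dummy ? 0 : 1;
}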

◆ create_master()

generic_scheduler * tbb::internal::generic_scheduler::create_master ( arena *  a)
static

Initialize a scheduler for a master thread.

Definition at line 1248 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), attach_arena(), tbb::task_group_context::default_traits, tbb::internal::market::global_market(), init_stack_info(), tbb::task_group_context::isolated, lock, tbb::internal::scheduler_properties::master, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, my_dummy_task, my_market, tbb::internal::scheduler_state::my_properties, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::NFS_Allocate(), tbb::task::prefix(), tbb::internal::market::release(), s, tbb::internal::governor::sign_on(), and tbb::internal::scheduler_properties::type.

Referenced by tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

1248  {
1249  // add an internal market reference; the public reference is possibly added in create_arena
1250  generic_scheduler* s = allocate_scheduler( market::global_market(/*is_public=*/false) );
1251  __TBB_ASSERT( !s->my_arena, NULL );
1252  __TBB_ASSERT( s->my_market, NULL );
1253  task& t = *s->my_dummy_task;
1254  s->my_properties.type = scheduler_properties::master;
1255  t.prefix().ref_count = 1;
1256 #if __TBB_TASK_GROUP_CONTEXT
1257  t.prefix().context = new ( NFS_Allocate(1, sizeof(task_group_context), NULL) )
1259 #if __TBB_FP_CONTEXT
1260  s->default_context()->capture_fp_settings();
1261 #endif
1262  // Do not call init_stack_info before the scheduler is set as master or worker.
1263  s->init_stack_info();
1264  context_state_propagation_mutex_type::scoped_lock lock(the_context_state_propagation_mutex);
1265  s->my_market->my_masters.push_front( *s );
1266  lock.release();
1267 #endif /* __TBB_TASK_GROUP_CONTEXT */
1268  if( a ) {
1269  // Master thread always occupies the first slot
1270  s->attach_arena( a, /*index*/0, /*is_master*/true );
1271  s->my_arena_slot->my_scheduler = s;
1272  a->my_default_ctx = s->default_context(); // also transfers implied ownership
1273  }
1274  __TBB_ASSERT( s->my_arena_index == 0, "Master thread must occupy the first slot in its arena" );
1275  governor::sign_on(s);
1276 
1277 #if _WIN32||_WIN64
1278  s->my_market->register_master( s->master_exec_resource );
1279 #endif /* _WIN32||_WIN64 */
1280  // Process any existing observers.
1281 #if __TBB_ARENA_OBSERVER
1282  __TBB_ASSERT( !a || a->my_observers.empty(), "Just created arena cannot have any observers associated with it" );
1283 #endif
1284 #if __TBB_SCHEDULER_OBSERVER
1285  the_global_observer_list.notify_entry_observers( s->my_last_global_observer, /*worker=*/false );
1286 #endif /* __TBB_SCHEDULER_OBSERVER */
1287  return s;
1288 }

◆ create_worker()

generic_scheduler * tbb::internal::generic_scheduler::create_worker ( market &  m,
size_t  index 
)
static

Initialize a scheduler for a worker thread.

Definition at line 1235 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::allocate_scheduler(), init_stack_info(), tbb::internal::scheduler_state::my_arena_index, my_dummy_task, tbb::internal::scheduler_state::my_properties, tbb::task::prefix(), s, tbb::internal::governor::sign_on(), tbb::internal::scheduler_properties::type, and tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::create_one_job().

1235  {
1237  __TBB_ASSERT(index, "workers should have index > 0");
1238  s->my_arena_index = index; // index is not a real slot in arena yet
1239  s->my_dummy_task->prefix().ref_count = 2;
1240  s->my_properties.type = scheduler_properties::worker;
1241  // Do not call init_stack_info before the scheduler is set as master or worker.
1242  s->init_stack_info();
1243  governor::sign_on(s);
1244  return s;
1245 }

◆ deallocate_task()

void tbb::internal::generic_scheduler::deallocate_task ( task &  t)
inline

Return task object to the memory allocator.

Definition at line 601 of file scheduler.h.

References tbb::internal::task_prefix::extra_state, tbb::internal::task_prefix::next, tbb::internal::NFS_Free(), p, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::task_prefix::state, and tbb::internal::task_prefix_reservation_size.

Referenced by free_nonlocal_small_task(), and free_scheduler().

601  {
602 #if TBB_USE_ASSERT
603  task_prefix& p = t.prefix();
604  p.state = 0xFF;
605  p.extra_state = 0xFF;
606  poison_pointer(p.next);
607 #endif /* TBB_USE_ASSERT */
609 #if __TBB_COUNT_TASK_NODES
610  --my_task_node_count;
611 #endif /* __TBB_COUNT_TASK_NODES */
612 }
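
The TBB_USE_ASSERT block above poisons the freed task's prefix (state and extra_state set to 0xFF, the next pointer overwritten via poison_pointer()) so that any later use of the dead object trips an assertion rather than silently corrupting memory. A small standalone sketch of that debug-only poisoning idea; prefix_t and the exact poison pattern are illustrative.

#include <cassert>
#include <cstdint>

struct prefix_t {                                     // illustrative stand-in for a task prefix
    void* next;
    unsigned char state;
};

const std::uintptr_t poison = ~std::uintptr_t(0) / 0xFF * 0xAB;   // 0xABAB...AB pattern

void poison_prefix(prefix_t& p) {
#ifndef NDEBUG
    p.next  = reinterpret_cast<void*>(poison);        // any later dereference is obviously bogus
    p.state = 0xFF;                                   // no legal task state has this value
#else
    (void)p;                                          // poisoning compiles away in release builds
#endif
}

int main() {
    prefix_t p = { 0, 0 };
    poison_prefix(p);
    assert(p.state == 0xFF || p.state == 0);          // 0xFF in debug builds, untouched otherwise
    return 0;
}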

◆ enqueue()

void tbb::internal::generic_scheduler::enqueue ( task &  t,
void *  reserved 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 710 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_ISOLATION_ARG, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), tbb::internal::assert_task_valid(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), tbb::internal::arena::enqueue_task(), tbb::internal::arena_slot::fill_with_canary_pattern(), GATHER_STATISTIC, get_task(), tbb::internal::arena_slot_line1::head, is_local_task_pool_quiescent(), is_proxy(), is_task_pool_published(), leave_task_pool(), tbb::internal::governor::local_scheduler(), tbb::internal::max(), min_task_pool_size, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, my_market, tbb::internal::arena_base::my_num_workers_requested, my_random, tbb::task::note_affinity(), tbb::internal::num_priority_levels, p, tbb::internal::poison_pointer(), tbb::task::prefix(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::task::ready, release_task_pool(), s, tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, tbb::internal::arena::wakeup, and tbb::internal::arena::work_spawned.

710  {
712  // these redirections are due to bw-compatibility, consider reworking some day
713  __TBB_ASSERT( s->my_arena, "thread is not in any arena" );
714  s->my_arena->enqueue_task(t, (intptr_t)prio, s->my_random );
715 }

◆ free_nonlocal_small_task()

void tbb::internal::generic_scheduler::free_nonlocal_small_task ( task &  t)

Free a small task t that was allocated by a different scheduler.

Definition at line 375 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_evict, __TBB_FetchAndDecrementWrelease, tbb::internal::as_atomic(), deallocate_task(), tbb::task::freed, ITT_NOTIFY, tbb::internal::NFS_Free(), plugged_return_list(), tbb::task::prefix(), s, tbb::task::state(), and sync_releasing.

Referenced by free_scheduler().

375  {
376  __TBB_ASSERT( t.state()==task::freed, NULL );
377  generic_scheduler& s = *static_cast<generic_scheduler*>(t.prefix().origin);
378  __TBB_ASSERT( &s!=this, NULL );
379  for(;;) {
380  task* old = s.my_return_list;
381  if( old==plugged_return_list() )
382  break;
383  // Atomically insert t at head of s.my_return_list
384  t.prefix().next = old;
385  ITT_NOTIFY( sync_releasing, &s.my_return_list );
386  if( as_atomic(s.my_return_list).compare_and_swap(&t, old )==old ) {
387 #if __TBB_PREFETCHING
388  __TBB_cl_evict(&t.prefix());
389  __TBB_cl_evict(&t);
390 #endif
391  return;
392  }
393  }
394  deallocate_task(t);
395  if( __TBB_FetchAndDecrementWrelease( &s.my_small_task_count )==1 ) {
396  // We freed the last task allocated by scheduler s, so it's our responsibility
397  // to free the scheduler.
398  NFS_Free( &s );
399  }
400 }
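
The return-list insertion above is a classic lock-free LIFO push: the freeing thread links the task in front of the current head and installs it with a compare-and-swap, retrying on contention, and a sentinel head value (plugged_return_list()) means the owning scheduler is shutting down and accepts no more entries. A standalone sketch of that push with std::atomic; node_t, plugged and push_return are illustrative names.

#include <atomic>
#include <cstdint>

struct node_t { node_t* next; };                     // stands in for a freed task

// Sentinel meaning "list is plugged, owner is going away".
node_t* const plugged = reinterpret_cast<node_t*>(~std::uintptr_t(0));

// Push a node onto another thread's return list; returns false if the list is plugged,
// in which case the caller must dispose of the node itself.
bool push_return(std::atomic<node_t*>& return_list, node_t* n) {
    for (;;) {
        node_t* old_head = return_list.load(std::memory_order_relaxed);
        if (old_head == plugged)
            return false;                            // owner refuses new entries
        n->next = old_head;                          // link in front of the current head
        if (return_list.compare_exchange_weak(old_head, n,
                                              std::memory_order_release,
                                              std::memory_order_relaxed))
            return true;                             // published to the owner
        // CAS failed: someone else pushed or the list was plugged; retry.
    }
}

int main() {
    std::atomic<node_t*> return_list(nullptr);
    node_t a = { nullptr };
    return push_return(return_list, &a) ? 0 : 1;
}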

◆ free_scheduler()

void tbb::internal::generic_scheduler::free_scheduler ( )

Destroy and deallocate this scheduler object.

Definition at line 260 of file scheduler.cpp.

References __TBB_ASSERT, deallocate_task(), free_nonlocal_small_task(), tbb::internal::scheduler_state::my_arena_slot, my_free_list, my_market, tbb::internal::scheduler_state::my_properties, my_return_list, my_small_task_count, tbb::internal::task_prefix::next, tbb::internal::NFS_Free(), tbb::internal::task_prefix::origin, p, plugged_return_list(), tbb::task::prefix(), and tbb::internal::governor::sign_off().

Referenced by cleanup_master(), and cleanup_worker().

260  {
261  __TBB_ASSERT( !my_arena_slot, NULL );
262 #if __TBB_PREVIEW_CRITICAL_TASKS
263  __TBB_ASSERT( !my_properties.has_taken_critical_task, "Critical tasks miscount." );
264 #endif
265 #if __TBB_TASK_GROUP_CONTEXT
266  cleanup_local_context_list();
267 #endif /* __TBB_TASK_GROUP_CONTEXT */
268  free_task<small_local_task>( *my_dummy_task );
269 
270 #if __TBB_HOARD_NONLOCAL_TASKS
271  while( task* t = my_nonlocal_free_list ) {
272  task_prefix& p = t->prefix();
273  my_nonlocal_free_list = p.next;
274  __TBB_ASSERT( p.origin && p.origin!=this, NULL );
276  }
277 #endif
278  // k accounts for a guard reference and each task that we deallocate.
279  intptr_t k = 1;
280  for(;;) {
281  while( task* t = my_free_list ) {
282  my_free_list = t->prefix().next;
283  deallocate_task(*t);
284  ++k;
285  }
287  break;
288  my_free_list = (task*)__TBB_FetchAndStoreW( &my_return_list, (intptr_t)plugged_return_list() );
289  }
290 #if __TBB_COUNT_TASK_NODES
291  my_market->update_task_node_count( my_task_node_count );
292 #endif /* __TBB_COUNT_TASK_NODES */
293  // Update my_small_task_count last. Doing so sooner might cause another thread to free *this.
294  __TBB_ASSERT( my_small_task_count>=k, "my_small_task_count corrupted" );
295  governor::sign_off(this);
296  if( __TBB_FetchAndAddW( &my_small_task_count, -k )==k )
297  NFS_Free( this );
298 }
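
The deallocation dance above hinges on my_small_task_count acting as a reference count with one extra "guard" reference taken in the constructor: whichever thread drops the count to zero (this call, or a later free_nonlocal_small_task() on another thread) frees the scheduler memory. A standalone sketch of that guard-reference pattern; sched_t and release_ref are illustrative names.

#include <atomic>
#include <cstdlib>
#include <new>

struct sched_t {
    std::atomic<long> small_task_count;              // live small tasks plus one guard reference
    sched_t() : small_task_count(1) {}               // the guard reference is taken at construction
};

// Called whenever k outstanding references (tasks and/or the guard) are released at once.
void release_ref(sched_t* s, long k) {
    if (s->small_task_count.fetch_sub(k, std::memory_order_acq_rel) == k)
        std::free(s);                                // we dropped the last reference: reclaim the object
}

int main() {
    void* raw = std::malloc(sizeof(sched_t));
    sched_t* s = new (raw) sched_t();                // placement-construct on raw memory
    s->small_task_count.fetch_add(2);                // pretend two small tasks were allocated
    release_ref(s, 1);                               // one task freed remotely
    release_ref(s, 2);                               // owner drops its task plus the guard: frees s
    return 0;
}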

◆ free_task()

template<free_task_hint hint>
void tbb::internal::generic_scheduler::free_task ( task &  t)

Put task on free list.

Does not call destructor.

Definition at line 648 of file scheduler.h.

References __TBB_ASSERT, tbb::task::allocated, tbb::internal::task_prefix::depth, tbb::task::executing, tbb::task::freed, GATHER_STATISTIC, tbb::internal::cpu_ctl_env::get_env(), h, tbb::internal::is_critical(), ITT_TASK_BEGIN, ITT_TASK_END, tbb::internal::local_task, tbb::task_group_context::my_cpu_ctl_env, tbb::task_group_context::my_name, tbb::internal::task_prefix::next, tbb::internal::no_cache, tbb::internal::task_prefix::origin, tbb::internal::task_prefix::owner, p, tbb::internal::poison_pointer(), poison_value, tbb::task::prefix(), tbb::internal::punned_cast(), tbb::task::ready, tbb::internal::task_prefix::ref_count, tbb::internal::cpu_ctl_env::set_env(), tbb::internal::small_local_task, tbb::internal::small_task, tbb::internal::task_prefix::state, and tbb::task::state().

Referenced by tbb::interface5::internal::task_base::destroy(), tbb::internal::allocate_additional_child_of_proxy::free(), tbb::internal::allocate_continuation_proxy::free(), tbb::internal::allocate_child_proxy::free(), and tbb::internal::auto_empty_task::~auto_empty_task().

648  {
649 #if __TBB_HOARD_NONLOCAL_TASKS
650  static const int h = hint&(~local_task);
651 #else
652  static const free_task_hint h = hint;
653 #endif
654  GATHER_STATISTIC(--my_counters.active_tasks);
655  task_prefix& p = t.prefix();
656  // Verify that optimization hints are correct.
657  __TBB_ASSERT( h!=small_local_task || p.origin==this, NULL );
658  __TBB_ASSERT( !(h&small_task) || p.origin, NULL );
659  __TBB_ASSERT( !(h&local_task) || (!p.origin || uintptr_t(p.origin) > uintptr_t(4096)), "local_task means allocated");
660  poison_value(p.depth);
661  poison_value(p.ref_count);
662  poison_pointer(p.owner);
663  __TBB_ASSERT( 1L<<t.state() & (1L<<task::executing|1L<<task::allocated), NULL );
664  p.state = task::freed;
665  if( h==small_local_task || p.origin==this ) {
666  GATHER_STATISTIC(++my_counters.free_list_length);
667  p.next = my_free_list;
668  my_free_list = &t;
669  } else if( !(h&local_task) && p.origin && uintptr_t(p.origin) < uintptr_t(4096) ) {
670  // a special value reserved for future use, do nothing since
671  // origin is not pointing to a scheduler instance
672  } else if( !(h&local_task) && p.origin ) {
673  GATHER_STATISTIC(++my_counters.free_list_length);
674 #if __TBB_HOARD_NONLOCAL_TASKS
675  if( !(h&no_cache) ) {
676  p.next = my_nonlocal_free_list;
677  my_nonlocal_free_list = &t;
678  } else
679 #endif
681  } else {
682  GATHER_STATISTIC(--my_counters.big_tasks);
683  deallocate_task(t);
684  }
685 }
task object is freshly allocated or recycled.
Definition: task.h:617
free_task_hint
Optimization hint to free_task that enables it to omit unnecessary tests and code.
Bitwise-OR of local_task and small_task.
void deallocate_task(task &t)
Return task object to the memory allocator.
Definition: scheduler.h:601
#define GATHER_STATISTIC(x)
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
void free_nonlocal_small_task(task &t)
Free a small task t that was allocated by a different scheduler.
Definition: scheduler.cpp:375
task is running, and will be destroyed after method execute() completes.
Definition: task.h:611
Disable caching for a small task.
Task is known to be a small task.
Task is known to have been allocated by this scheduler.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
task object is on free list, or is going to be put there, or was just taken off.
Definition: task.h:619
task * my_free_list
Free list of small tasks that can be reused.
Definition: scheduler.h:161
#define poison_value(g)
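The template hint above is a compile-time constant, so each instantiation keeps only the branches it can actually take. As a rough stand-alone illustration of that pattern (not TBB code; FreeHint, Pool, and Node are made-up names), a minimal sketch:

#include <cstdlib>
#include <cassert>

// Illustrative stand-ins for TBB's free_task_hint machinery (names are hypothetical).
enum FreeHint { generic_block = 0, local_small_block = 1 };

struct Node { Node* next; };

struct Pool {
    Node* free_list = nullptr;              // thread-local cache of small blocks

    template<FreeHint hint>
    void free_block(Node* n) {
        if (hint == local_small_block) {    // hint is known at compile time
            n->next = free_list;            // push onto the local free list
            free_list = n;
        } else {
            std::free(n);                   // fall back to the general allocator
        }
    }
};

int main() {
    Pool pool;
    Node* n = static_cast<Node*>(std::malloc(sizeof(Node)));
    pool.free_block<local_small_block>(n);  // cached on the free list
    assert(pool.free_list == n);
    pool.free_block<generic_block>(static_cast<Node*>(std::malloc(sizeof(Node))));
    std::free(pool.free_list);              // clean up the cached node
}

Because the hint is a template parameter, the branch not taken is dead code in each instantiation, which is what the free_task_hint documentation means by omitting unnecessary tests.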

◆ get_mailbox_task()

task * tbb::internal::generic_scheduler::get_mailbox_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempt to get a task from the mailbox.

Gets a task only if it has not been executed by its sender or a thief that has stolen it from the sender's task pool. Otherwise returns NULL.

This method is intended to be used only by the thread extracting the proxy from its mailbox. (In contrast to the local task pool, a mailbox can be read only by its owner.)

Definition at line 1196 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_ISOLATION_EXPR, tbb::internal::es_task_is_stolen, ITT_NOTIFY, tbb::internal::task_proxy::mailbox_bit, tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_inbox, and tbb::internal::mail_inbox::pop().

1196  {
1197  __TBB_ASSERT( my_affinity_id>0, "not in arena" );
1198  while ( task_proxy* const tp = my_inbox.pop( __TBB_ISOLATION_EXPR( isolation ) ) ) {
1199  if ( task* result = tp->extract_task<task_proxy::mailbox_bit>() ) {
1200  ITT_NOTIFY( sync_acquired, my_inbox.outbox() );
1201  result->prefix().extra_state |= es_task_is_stolen;
1202  return result;
1203  }
1204  // We have exclusive access to the proxy, and can destroy it.
1205  free_task<no_cache_small_task>(*tp);
1206  }
1207  return NULL;
1208 }
static const intptr_t mailbox_bit
Definition: mailbox.h:31
#define __TBB_ISOLATION_EXPR(isolation)
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:89
task_proxy * pop(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get next piece of mail, or NULL if mailbox is empty.
Definition: mailbox.h:202
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
Set if the task has been stolen.
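The surrounding loop is a simple drain: pop proxies until one still carries a live task, and destroy the ones whose task was already claimed from the other location. A self-contained sketch of that pattern, using std::deque in place of TBB's mail_inbox (all names here are illustrative):

#include <deque>
#include <memory>
#include <iostream>

struct Task { int id; };

// Hypothetical proxy: the task pointer is cleared once some thread claims it.
struct Proxy {
    Task* payload;
    Task* extract() { Task* t = payload; payload = nullptr; return t; }
};

// Drain the inbox until a live task is found or the inbox is empty.
Task* pop_live_task(std::deque<std::unique_ptr<Proxy>>& inbox) {
    while (!inbox.empty()) {
        std::unique_ptr<Proxy> tp = std::move(inbox.front());
        inbox.pop_front();
        if (Task* result = tp->extract())
            return result;                 // proxy still held the task
        // Otherwise the task was taken via its other location; the proxy is
        // exclusively ours now and is destroyed here by unique_ptr.
    }
    return nullptr;
}

int main() {
    std::deque<std::unique_ptr<Proxy>> inbox;
    Task t{42};
    inbox.push_back(std::unique_ptr<Proxy>(new Proxy{nullptr})); // already claimed
    inbox.push_back(std::unique_ptr<Proxy>(new Proxy{&t}));      // still live
    Task* got = pop_live_task(inbox);
    std::cout << (got ? got->id : -1) << "\n";                   // prints 42
}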

◆ get_task() [1/2]

task * tbb::internal::generic_scheduler::get_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )
inline

Get a task from the local pool.

Called only by the pool owner. Returns the pointer to the task or NULL if a suitable task is not found. Resets the pool if it is empty.

Definition at line 974 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::__TBB_store_with_release(), acquire_task_pool(), tbb::internal::arena::advertise_new_work(), tbb::internal::assert_task_valid(), tbb::atomic_fence(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::task::note_affinity(), tbb::internal::poison_pointer(), publish_task_pool(), release_task_pool(), reset_task_pool_and_leave(), tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::wakeup.

Referenced by enqueue().

974  {
976  // The current task position in the task pool.
977  size_t T0 = __TBB_load_relaxed( my_arena_slot->tail );
978  // The bounds of available tasks in the task pool. H0 is only used when the head bound is reached.
979  size_t H0 = (size_t)-1, T = T0;
980  task* result = NULL;
981  bool task_pool_empty = false;
982  __TBB_ISOLATION_EXPR( bool tasks_omitted = false );
983  do {
984  __TBB_ASSERT( !result, NULL );
986  atomic_fence();
987  if ( (intptr_t)__TBB_load_relaxed( my_arena_slot->head ) > (intptr_t)T ) {
990  if ( (intptr_t)H0 > (intptr_t)T ) {
991  // The thief has not backed off - nothing to grab.
994  && H0 == T + 1, "victim/thief arbitration algorithm failure" );
996  // No tasks in the task pool.
997  task_pool_empty = true;
998  break;
999  } else if ( H0 == T ) {
1000  // There is only one task in the task pool.
1002  task_pool_empty = true;
1003  } else {
1004  // Release task pool if there are still some tasks.
1005  // After the release, the tail will be less than T, thus a thief
1006  // will not attempt to get a task at position T.
1008  }
1009  }
1010  __TBB_control_consistency_helper(); // on my_arena_slot->head
1011 #if __TBB_TASK_ISOLATION
1012  result = get_task( T, isolation, tasks_omitted );
1013  if ( result ) {
1015  break;
1016  } else if ( !tasks_omitted ) {
1018  __TBB_ASSERT( T0 == T+1, NULL );
1019  T0 = T;
1020  }
1021 #else
1022  result = get_task( T );
1023 #endif /* __TBB_TASK_ISOLATION */
1024  } while ( !result && !task_pool_empty );
1025 
1026 #if __TBB_TASK_ISOLATION
1027  if ( tasks_omitted ) {
1028  if ( task_pool_empty ) {
1029  // All tasks have been checked. The task pool should be in reset state.
1030  // We just restore the bounds for the available tasks.
1031  // TODO: Does it have sense to move them to the beginning of the task pool?
1033  if ( result ) {
1034  // If we have a task, it should be at H0 position.
1035  __TBB_ASSERT( H0 == T, NULL );
1036  ++H0;
1037  }
1038  __TBB_ASSERT( H0 <= T0, NULL );
1039  if ( H0 < T0 ) {
1040  // Restore the task pool if there are some tasks.
1043  // The release fence is used in publish_task_pool.
1045  // Synchronize with snapshot as we published some tasks.
1047  }
1048  } else {
1049  // A task has been obtained. We need to make a hole in position T.
1051  __TBB_ASSERT( result, NULL );
1052  my_arena_slot->task_pool_ptr[T] = NULL;
1054  // Synchronize with snapshot as we published some tasks.
1055  // TODO: consider some approach not to call wakeup for each time. E.g. check if the tail reached the head.
1057  }
1058 
1059  // Now it is safe to call note_affinity because the task pool is restored.
1060  if ( my_innermost_running_task == result ) {
1061  assert_task_valid( result );
1062  result->note_affinity( my_affinity_id );
1063  }
1064  }
1065 #endif /* __TBB_TASK_ISOLATION */
1066  __TBB_ASSERT( (intptr_t)__TBB_load_relaxed( my_arena_slot->tail ) >= 0, NULL );
1067  __TBB_ASSERT( result || __TBB_ISOLATION_EXPR( tasks_omitted || ) is_quiescent_local_task_pool_reset(), NULL );
1068  return result;
1069 } // generic_scheduler::get_task
task * get_task(__TBB_ISOLATION_EXPR(isolation_tag isolation))
Get a task from the local pool.
Definition: scheduler.cpp:974
void release_task_pool() const
Unlocks the local task pool.
Definition: scheduler.cpp:485
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
#define __TBB_ISOLATION_EXPR(isolation)
void reset_task_pool_and_leave()
Resets head and tail indices to 0, and leaves task pool.
Definition: scheduler.h:620
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
void acquire_task_pool() const
Locks the local task pool.
Definition: scheduler.cpp:456
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:389
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:89
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void assert_task_valid(const task *)
#define __TBB_control_consistency_helper()
Definition: gcc_generic.h:60
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
void publish_task_pool()
Used by workers to enter the task pool.
Definition: scheduler.cpp:1210
void atomic_fence()
Sequentially consistent full memory fence.
Definition: tbb_machine.h:342
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:77
bool is_quiescent_local_task_pool_reset() const
Definition: scheduler.h:562
__TBB_atomic size_t head
Index of the first ready task in the deque.
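The owner-side take above follows the usual two-index deque protocol: speculatively decrement the tail, issue a full fence, and fall back to the lock only when the head may have crossed the new tail. A condensed sketch of that arbitration with std::atomic and std::mutex (owner side only; TBB's relaxed loads/stores, isolation, and proxy handling are omitted):

#include <atomic>
#include <mutex>
#include <vector>
#include <cstddef>

struct Deque {
    std::vector<void*> pool;
    std::atomic<std::ptrdiff_t> head{0}, tail{0};
    std::mutex lock;                                   // stands in for the task-pool lock

    // Owner-side pop from the tail (LIFO). Returns nullptr when empty.
    void* take() {
        std::ptrdiff_t t = tail.load(std::memory_order_relaxed) - 1;
        tail.store(t, std::memory_order_relaxed);      // speculatively claim the last slot
        std::atomic_thread_fence(std::memory_order_seq_cst);   // like atomic_fence()
        if (head.load(std::memory_order_relaxed) > t) {
            std::lock_guard<std::mutex> g(lock);       // possible conflict: arbitrate
            if (head.load(std::memory_order_relaxed) > t) {
                head.store(0, std::memory_order_relaxed);   // pool is empty: reset indices
                tail.store(0, std::memory_order_relaxed);
                return nullptr;
            }
        }
        return pool[static_cast<std::size_t>(t)];      // the slot at t is safely ours
    }
};

int main() {
    Deque d;
    int x = 7;
    d.pool.push_back(&x);
    d.tail.store(1);
    return d.take() == &x ? 0 : 1;
}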

◆ get_task() [2/2]

task * tbb::internal::generic_scheduler::get_task ( size_t  T)
inline

Get a task from the local pool at specified location T.

Returns the pointer to the task, or NULL if the task cannot be executed, e.g. the proxy has been deallocated or the isolation constraint is not met. tasks_omitted indicates whether some tasks have been omitted. Called only by the pool owner. The caller must guarantee that position T is not available to a thief.

Definition at line 922 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, is_local_task_pool_quiescent(), is_proxy(), is_version_3_task(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::no_isolation, tbb::task::note_affinity(), tbb::internal::poison_pointer(), tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

924 {
926  || is_local_task_pool_quiescent(), "Is it safe to get a task at position T?" );
927 
928  task* result = my_arena_slot->task_pool_ptr[T];
929  __TBB_ASSERT( !is_poisoned( result ), "The poisoned task is going to be processed" );
930 #if __TBB_TASK_ISOLATION
931  if ( !result )
932  return NULL;
933 
934  bool omit = isolation != no_isolation && isolation != result->prefix().isolation;
935  if ( !omit && !is_proxy( *result ) )
936  return result;
937  else if ( omit ) {
938  tasks_omitted = true;
939  return NULL;
940  }
941 #else
943  if ( !result || !is_proxy( *result ) )
944  return result;
945 #endif /* __TBB_TASK_ISOLATION */
946 
947  task_proxy& tp = static_cast<task_proxy&>(*result);
948  if ( task *t = tp.extract_task<task_proxy::pool_bit>() ) {
949  GATHER_STATISTIC( ++my_counters.proxies_executed );
950  // Following assertion should be true because TBB 2.0 tasks never specify affinity, and hence are not proxied.
951  __TBB_ASSERT( is_version_3_task( *t ), "backwards compatibility with TBB 2.0 broken" );
953  my_innermost_running_task = t; // prepare for calling note_affinity()
954 #if __TBB_TASK_ISOLATION
955  // Task affinity has changed. Postpone calling note_affinity because the task pool is in invalid state.
956  if ( !tasks_omitted )
957 #endif /* __TBB_TASK_ISOLATION */
958  {
960  t->note_affinity( my_affinity_id );
961  }
962  return t;
963  }
964 
965  // Proxy was empty, so it's our responsibility to free it
966  free_task<small_task>( tp );
967 #if __TBB_TASK_ISOLATION
968  if ( tasks_omitted )
969  my_arena_slot->task_pool_ptr[T] = NULL;
970 #endif /* __TBB_TASK_ISOLATION */
971  return NULL;
972 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
#define GATHER_STATISTIC(x)
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:269
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
static bool is_version_3_task(task &t)
Definition: scheduler.h:129
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:89
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
static const intptr_t pool_bit
Definition: mailbox.h:30
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
const isolation_tag no_isolation
Definition: task.h:125
bool is_local_task_pool_quiescent() const
Definition: scheduler.h:551
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:77
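extract_task resolves the race between the two places a proxy can be found: the tag word holds the task pointer ORed with a pool bit and a mailbox bit, and the first consumer to CAS its bit away wins the task, while the other one merely frees the proxy. A sketch of that handshake with std::atomic (the bit values and names are illustrative, not TBB's constants):

#include <atomic>
#include <cstdint>
#include <iostream>

struct Task { int id; };    // assumes Task alignment leaves the two low pointer bits free

constexpr std::intptr_t pool_bit      = 1;                    // reachable from the task pool
constexpr std::intptr_t mailbox_bit   = 2;                    // reachable from a mailbox
constexpr std::intptr_t location_mask = pool_bit | mailbox_bit;

struct Proxy {
    std::atomic<std::intptr_t> task_and_tag;                  // task pointer ORed with location bits

    // from_bit identifies the location the caller popped the proxy from. The first
    // caller claims the task and leaves only the *other* bit behind, telling the
    // other location to simply discard the proxy.
    Task* extract(std::intptr_t from_bit) {
        std::intptr_t tat = task_and_tag.load(std::memory_order_acquire);
        if (tat != from_bit) {
            std::intptr_t cleaner_bit = location_mask & ~from_bit;
            if (task_and_tag.compare_exchange_strong(tat, cleaner_bit))
                return reinterpret_cast<Task*>(tat & ~location_mask);   // we won the task
        }
        return nullptr;   // already claimed via the other location; caller frees the proxy
    }
};

int main() {
    Task t{7};
    Proxy p;
    p.task_and_tag.store(reinterpret_cast<std::intptr_t>(&t) | location_mask);
    Task* a = p.extract(pool_bit);      // pool owner claims the task
    Task* b = p.extract(mailbox_bit);   // mailbox owner finds it already taken
    std::cout << (a ? a->id : -1) << " " << (b ? b->id : -1) << "\n";   // prints 7 -1
}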

◆ init_stack_info()

void tbb::internal::generic_scheduler::init_stack_info ( )

Sets up the data necessary for the stealing limiting heuristics.

Definition at line 146 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_get_bsp(), __TBB_get_object_ref, tbb::internal::__TBB_load_relaxed(), tbb::spin_mutex::scoped_lock::acquire(), tbb::internal::as_atomic(), tbb::atomic_fence(), tbb::task_group_context::binding_required, tbb::task_group_context::detached, tbb::task_group_context::dying, is_worker(), lock, tbb::internal::MByte, tbb::task_group_context::my_kind, my_market, tbb::internal::context_list_node_t::my_next, my_stealing_threshold, tbb::task_group_context::my_version_and_traits, tbb::relaxed, tbb::release, tbb::internal::spin_wait_until_eq(), and tbb::internal::market::worker_stack_size().

Referenced by create_master(), and create_worker().

146  {
147  // Stacks are growing top-down. Highest address is called "stack base",
148  // and the lowest is "stack limit".
149  __TBB_ASSERT( !my_stealing_threshold, "Stealing threshold has already been calculated" );
150  size_t stack_size = my_market->worker_stack_size();
151 #if USE_WINTHREAD
152 #if defined(_MSC_VER)&&_MSC_VER<1400 && !_WIN64
153  NT_TIB *pteb;
154  __asm mov eax, fs:[0x18]
155  __asm mov pteb, eax
156 #else
157  NT_TIB *pteb = (NT_TIB*)NtCurrentTeb();
158 #endif
159  __TBB_ASSERT( &pteb < pteb->StackBase && &pteb > pteb->StackLimit, "invalid stack info in TEB" );
160  __TBB_ASSERT( stack_size >0, "stack_size not initialized?" );
161  // When a thread is created with the attribute STACK_SIZE_PARAM_IS_A_RESERVATION, stack limit
162  // in the TIB points to the committed part of the stack only. This renders the expression
163  // "(uintptr_t)pteb->StackBase / 2 + (uintptr_t)pteb->StackLimit / 2" virtually useless.
164  // Thus for worker threads we use the explicit stack size we used while creating them.
165  // And for master threads we rely on the following fact and assumption:
166  // - the default stack size of a master thread on Windows is 1M;
167  // - if it was explicitly set by the application it is at least as large as the size of a worker stack.
168  if ( is_worker() || stack_size < MByte )
169  my_stealing_threshold = (uintptr_t)pteb->StackBase - stack_size / 2;
170  else
171  my_stealing_threshold = (uintptr_t)pteb->StackBase - MByte / 2;
172 #else /* USE_PTHREAD */
173  // There is no portable way to get stack base address in Posix, so we use
174  // non-portable method (on all modern Linux) or the simplified approach
175  // based on the common sense assumptions. The most important assumption
176  // is that the main thread's stack size is not less than that of other threads.
177  // See also comment 3 at the end of this file
178  void *stack_base = &stack_size;
179 #if __linux__ && !__bg__
180 #if __TBB_ipf
181  void *rsb_base = __TBB_get_bsp();
182 #endif
183  size_t np_stack_size = 0;
184  void *stack_limit = NULL;
185  pthread_attr_t np_attr_stack;
186  if( 0 == pthread_getattr_np(pthread_self(), &np_attr_stack) ) {
187  if ( 0 == pthread_attr_getstack(&np_attr_stack, &stack_limit, &np_stack_size) ) {
188 #if __TBB_ipf
189  pthread_attr_t attr_stack;
190  if ( 0 == pthread_attr_init(&attr_stack) ) {
191  if ( 0 == pthread_attr_getstacksize(&attr_stack, &stack_size) ) {
192  if ( np_stack_size < stack_size ) {
193  // We are in a secondary thread. Use reliable data.
194  // IA-64 architecture stack is split into RSE backup and memory parts
195  rsb_base = stack_limit;
196  stack_size = np_stack_size/2;
197  // Limit of the memory part of the stack
198  stack_limit = (char*)stack_limit + stack_size;
199  }
200  // We are either in the main thread or this thread stack
 201  // is bigger than that of the main one. As we cannot discern
202  // these cases we fall back to the default (heuristic) values.
203  }
204  pthread_attr_destroy(&attr_stack);
205  }
206  // IA-64 architecture stack is split into RSE backup and memory parts
207  my_rsb_stealing_threshold = (uintptr_t)((char*)rsb_base + stack_size/2);
208 #endif /* __TBB_ipf */
209  // Size of the stack free part
210  stack_size = size_t((char*)stack_base - (char*)stack_limit);
211  }
212  pthread_attr_destroy(&np_attr_stack);
213  }
214 #endif /* __linux__ */
215  __TBB_ASSERT( stack_size>0, "stack size must be positive" );
216  my_stealing_threshold = (uintptr_t)((char*)stack_base - stack_size/2);
217 #endif /* USE_PTHREAD */
218 }
market * my_market
The market I am in.
Definition: scheduler.h:155
void * __TBB_get_bsp()
Retrieves the current RSE backing store pointer. IA64 specific.
size_t worker_stack_size() const
Returns the requested stack size of worker threads.
Definition: market.h:294
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:591
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
uintptr_t my_stealing_threshold
Position in the call stack specifying its maximal filling when stealing is still allowed.
Definition: scheduler.h:138
const size_t MByte
Definition: tbb_misc.h:40
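On the POSIX path the heuristic reduces to picking an address roughly halfway down the free part of the stack and refusing to steal once the stack has grown past it. A Linux-only sketch of computing such a threshold with pthread_getattr_np (error handling and the Windows/IA-64 branches above are omitted; the 1 MB fallback is an assumption, not TBB's value):

#define _GNU_SOURCE
#include <pthread.h>
#include <cstddef>
#include <cstdint>
#include <cstdio>

// Returns an address roughly halfway into the current thread's stack.
// Stealing (i.e. further recursion) would be disallowed below this point.
static std::uintptr_t stealing_threshold() {
    void* stack_base = &stack_base;            // an address near the top of the stack
    std::size_t stack_size = 1 << 20;          // fallback guess: 1 MB (assumption)
    pthread_attr_t attr;
    if (pthread_getattr_np(pthread_self(), &attr) == 0) {
        void* stack_limit = nullptr;
        std::size_t np_size = 0;
        if (pthread_attr_getstack(&attr, &stack_limit, &np_size) == 0)
            stack_size = (char*)stack_base - (char*)stack_limit;   // free part of the stack
        pthread_attr_destroy(&attr);
    }
    return (std::uintptr_t)((char*)stack_base - stack_size / 2);
}

int main() {
    std::printf("stealing threshold: %p\n", (void*)stealing_threshold());
}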

◆ is_local_task_pool_quiescent()

bool tbb::internal::generic_scheduler::is_local_task_pool_quiescent ( ) const
inline

Definition at line 551 of file scheduler.h.

References __TBB_ASSERT, EmptyTaskPool, and LockedTaskPool.

Referenced by enqueue(), and get_task().

551  {
552  __TBB_ASSERT( my_arena_slot, 0 );
553  task** tp = my_arena_slot->task_pool;
554  return tp == EmptyTaskPool || tp == LockedTaskPool;
555 }
#define EmptyTaskPool
Definition: scheduler.h:42
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define LockedTaskPool
Definition: scheduler.h:43
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
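EmptyTaskPool and LockedTaskPool are sentinel pointer values, so a slot's state (empty, locked, or published with a real pool) is encoded entirely in the one task_pool pointer that thieves inspect. A small sketch of that encoding (the sentinel values chosen here are illustrative):

#include <cstdint>
#include <cassert>

using TaskPtr = void*;

// Illustrative sentinels; any two addresses that can never be a real pool work.
static TaskPtr* const kEmptyPool  = nullptr;
static TaskPtr* const kLockedPool = reinterpret_cast<TaskPtr*>(~std::uintptr_t(0));

struct Slot {
    TaskPtr* task_pool = kEmptyPool;   // what other threads read

    bool is_quiescent() const {        // nothing for a thief to look at
        return task_pool == kEmptyPool || task_pool == kLockedPool;
    }
    bool is_published() const {        // a real, stealable pool is visible
        return !is_quiescent();
    }
};

int main() {
    Slot s;
    assert(s.is_quiescent());
    TaskPtr storage[4] = {};
    s.task_pool = storage;             // publish the pool
    assert(s.is_published());
    s.task_pool = kLockedPool;         // lock it, e.g. for relocation
    assert(s.is_quiescent());
}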

◆ is_proxy()

static bool tbb::internal::generic_scheduler::is_proxy ( const task t)
inlinestatic

True if t is a task_proxy.

Definition at line 269 of file scheduler.h.

References __TBB_ISOLATION_ARG, __TBB_ISOLATION_EXPR, tbb::internal::es_task_proxy, and tbb::task::prefix().

Referenced by enqueue(), get_task(), steal_task(), and steal_task_from().

269  {
270  return t.prefix().extra_state==es_task_proxy;
271  }
Tag for v3 task_proxy.

◆ is_quiescent_local_task_pool_empty()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_empty ( ) const
inline

Definition at line 557 of file scheduler.h.

References __TBB_ASSERT, and tbb::internal::__TBB_load_relaxed().

Referenced by leave_task_pool().

557  {
558  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
559  return __TBB_load_relaxed( my_arena_slot->head ) == __TBB_load_relaxed( my_arena_slot->tail );
560 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
bool is_local_task_pool_quiescent() const
Definition: scheduler.h:551
__TBB_atomic size_t head
Index of the first ready task in the deque.

◆ is_quiescent_local_task_pool_reset()

bool tbb::internal::generic_scheduler::is_quiescent_local_task_pool_reset ( ) const
inline

Definition at line 562 of file scheduler.h.

References __TBB_ASSERT, and tbb::internal::__TBB_load_relaxed().

Referenced by get_task(), prepare_task_pool(), and tbb::internal::arena::process().

562  {
563  __TBB_ASSERT( is_local_task_pool_quiescent(), "Task pool is not quiescent" );
564  return __TBB_load_relaxed( my_arena_slot->head ) == 0 && __TBB_load_relaxed( my_arena_slot->tail ) == 0;
565 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
bool is_local_task_pool_quiescent() const
Definition: scheduler.h:551
__TBB_atomic size_t head
Index of the first ready task in the deque.

◆ is_task_pool_published()

bool tbb::internal::generic_scheduler::is_task_pool_published ( ) const
inline

Definition at line 546 of file scheduler.h.

References __TBB_ASSERT, and EmptyTaskPool.

Referenced by acquire_task_pool(), cleanup_master(), enqueue(), get_task(), leave_task_pool(), local_spawn(), prepare_task_pool(), and release_task_pool().

546  {
547  __TBB_ASSERT( my_arena_slot, 0 );
548  return my_arena_slot->task_pool != EmptyTaskPool;
549 }
#define EmptyTaskPool
Definition: scheduler.h:42
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71

◆ is_version_3_task()

static bool tbb::internal::generic_scheduler::is_version_3_task ( task t)
inlinestatic

Definition at line 129 of file scheduler.h.

References tbb::task::prefix().

Referenced by get_task(), prepare_for_spawning(), and steal_task().

129  {
130 #if __TBB_PREVIEW_CRITICAL_TASKS
131  return (t.prefix().extra_state & 0x7)>=0x1;
132 #else
133  return (t.prefix().extra_state & 0x0F)>=0x1;
134 #endif
135  }

◆ is_worker()

bool tbb::internal::generic_scheduler::is_worker ( ) const
inline

True if running on a worker thread, false otherwise.

Definition at line 591 of file scheduler.h.

References tbb::internal::scheduler_properties::worker.

Referenced by tbb::internal::market::cleanup(), tbb::interface7::internal::wait_task::execute(), init_stack_info(), nested_arena_entry(), nested_arena_exit(), and tbb::internal::governor::tls_value_of().

591  {
592  return my_properties.type == scheduler_properties::worker;
593 }
scheduler_properties my_properties
Definition: scheduler.h:91
bool type
Indicates that a scheduler acts as a master or a worker.
Definition: scheduler.h:50

◆ leave_task_pool()

void tbb::internal::generic_scheduler::leave_task_pool ( )
inline

Leave the task pool.

Leaving the task pool automatically releases it if it is locked.

Definition at line 1222 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), EmptyTaskPool, is_quiescent_local_task_pool_empty(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by cleanup_master(), and enqueue().

1222  {
1223  __TBB_ASSERT( is_task_pool_published(), "Not in arena" );
1224  // Do not reset my_arena_index. It will be used to (attempt to) re-acquire the slot next time
1225  __TBB_ASSERT( &my_arena->my_slots[my_arena_index] == my_arena_slot, "arena slot and slot index mismatch" );
1226  __TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
1227  __TBB_ASSERT ( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when leaving arena" );
1228  ITT_NOTIFY(sync_releasing, my_arena_slot);
1229  // No release fence is necessary here as this assignment precludes external
1230  // accesses to the local task pool when becomes visible. Thus it is harmless
1231  // if it gets hoisted above preceding local bookkeeping manipulations.
1232  __TBB_store_relaxed( my_arena_slot->task_pool, EmptyTaskPool );
1233 }
arena_slot my_slots[1]
Definition: arena.h:296
bool is_quiescent_local_task_pool_empty() const
Definition: scheduler.h:557
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
#define EmptyTaskPool
Definition: scheduler.h:42
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
#define LockedTaskPool
Definition: scheduler.h:43
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:68

◆ local_spawn()

void tbb::internal::generic_scheduler::local_spawn ( task first,
task *&  next 
)

Conceptually, this method should be a member of class scheduler. But doing so would force us to publish class scheduler in the headers.

Definition at line 614 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::arena::advertise_new_work(), assert_task_pool_valid(), commit_spawned_tasks(), tbb::internal::fast_reverse_vector< T, max_segments >::copy_memory(), end, tbb::internal::governor::is_set(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::task::prefix(), prepare_for_spawning(), prepare_task_pool(), publish_task_pool(), tbb::internal::fast_reverse_vector< T, max_segments >::push_back(), tbb::internal::fast_reverse_vector< T, max_segments >::size(), tbb::internal::arena_slot_line2::task_pool_ptr, and tbb::internal::arena::work_spawned.

Referenced by local_spawn_root_and_wait(), spawn(), and tbb::task::spawn_and_wait_for_all().

614  {
615  __TBB_ASSERT( first, NULL );
616  __TBB_ASSERT( governor::is_set(this), NULL );
617 #if __TBB_TODO
618  // We need to consider capping the max task pool size and switching
619  // to in-place task execution whenever it is reached.
620 #endif
621  if ( &first->prefix().next == &next ) {
622  // Single task is being spawned
623 #if __TBB_TODO
624  // TODO:
625  // In the future we need to add overloaded spawn method for a single task,
626  // and a method accepting an array of task pointers (we may also want to
627  // change the implementation of the task_list class). But since such changes
628  // may affect the binary compatibility, we postpone them for a while.
629 #endif
630 #if __TBB_PREVIEW_CRITICAL_TASKS
631  if( !handled_as_critical( *first ) )
632 #endif
633  {
 634  size_t T = prepare_task_pool( 1 );
 635  my_arena_slot->task_pool_ptr[T] = prepare_for_spawning( first );
 636  commit_spawned_tasks( T + 1 );
 637  if ( !is_task_pool_published() )
 638  publish_task_pool();
 639  }
640  }
641  else {
642  // Task list is being spawned
643 #if __TBB_TODO
644  // TODO: add task_list::front() and implement&document the local execution ordering which is
645  // opposite to the current implementation. The idea is to remove hackish fast_reverse_vector
646  // and use push_back/push_front when accordingly LIFO and FIFO order of local execution is
647  // desired. It also requires refactoring of the reload_tasks method and my_offloaded_tasks list.
648  // Additional benefit may come from adding counter to the task_list so that it can reserve enough
649  // space in the task pool in advance and move all the tasks directly without any intermediate
650  // storages. But it requires dealing with backward compatibility issues and still supporting
651  // counter-less variant (though not necessarily fast implementation).
652 #endif
653  task *arr[min_task_pool_size];
654  fast_reverse_vector<task*> tasks(arr, min_task_pool_size);
655  task *t_next = NULL;
656  for( task* t = first; ; t = t_next ) {
657  // If t is affinitized to another thread, it may already be executed
658  // and destroyed by the time prepare_for_spawning returns.
659  // So milk it while it is alive.
660  bool end = &t->prefix().next == &next;
661  t_next = t->prefix().next;
662 #if __TBB_PREVIEW_CRITICAL_TASKS
663  if( !handled_as_critical( *t ) )
664 #endif
665  tasks.push_back( prepare_for_spawning(t) );
666  if( end )
667  break;
668  }
669  if( size_t num_tasks = tasks.size() ) {
670  size_t T = prepare_task_pool( num_tasks );
671  tasks.copy_memory( my_arena_slot->task_pool_ptr + T );
672  commit_spawned_tasks( T + num_tasks );
 673  if ( !is_task_pool_published() )
 674  publish_task_pool();
 675  }
 676  }
 677  my_arena->advertise_new_work<arena::work_spawned>();
 678  assert_task_pool_valid();
 679 }
size_t prepare_task_pool(size_t n)
Makes sure that the task pool can accommodate at least n more elements.
Definition: scheduler.cpp:402
void commit_spawned_tasks(size_t new_tail)
Makes newly spawned tasks visible to thieves.
Definition: scheduler.h:628
task * prepare_for_spawning(task *t)
Checks if t is affinitized to another thread, and if so, bundles it as proxy.
Definition: scheduler.cpp:558
static bool is_set(generic_scheduler *s)
Used to check validity of the local scheduler TLS contents.
Definition: governor.cpp:120
static const size_t min_task_pool_size
Definition: scheduler.h:290
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:389
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
void publish_task_pool()
Used by workers to enter the task pool.
Definition: scheduler.cpp:1210
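The commit step is the important part: the batch is first written into the pool with plain stores and then made visible to thieves with a single release store to the tail index, so a thief can never observe a partially written slot. A sketch of that publish step, with std::atomic standing in for __TBB_store_with_release (names are illustrative):

#include <atomic>
#include <vector>
#include <cstdio>

struct Task { int id; };

struct Slot {
    std::vector<Task*> pool = std::vector<Task*>(64, nullptr);
    std::atomic<std::size_t> tail{0};          // thieves read this index

    // Copy the batch into the pool first, then publish it with one release
    // store so every pointer is visible before the new tail value is.
    void spawn_batch(Task* const* tasks, std::size_t n) {
        std::size_t t = tail.load(std::memory_order_relaxed);
        for (std::size_t i = 0; i < n; ++i)
            pool[t + i] = tasks[i];            // plain writes, not yet visible
        tail.store(t + n, std::memory_order_release);   // like commit_spawned_tasks
    }
};

int main() {
    Slot s;
    Task a{1}, b{2}, c{3};
    Task* batch[] = {&a, &b, &c};
    s.spawn_batch(batch, 3);
    std::printf("tail = %zu\n", s.tail.load());   // prints 3
}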

◆ local_spawn_root_and_wait()

void tbb::internal::generic_scheduler::local_spawn_root_and_wait ( task first,
task *&  next 
)

Definition at line 681 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, tbb::internal::governor::is_set(), local_spawn(), local_wait_for_all(), tbb::internal::auto_empty_task::prefix(), tbb::task::prefix(), and tbb::internal::task_prefix::ref_count.

Referenced by spawn_root_and_wait().

681  {
682  __TBB_ASSERT( governor::is_set(this), NULL );
683  __TBB_ASSERT( first, NULL );
 684  auto_empty_task dummy( __TBB_CONTEXT_ARG(this, first->prefix().context) );
 685  reference_count n = 0;
 686  for( task* t=first; ; t=t->prefix().next ) {
687  ++n;
688  __TBB_ASSERT( !t->prefix().parent, "not a root task, or already running" );
689  t->prefix().parent = &dummy;
690  if( &t->prefix().next==&next ) break;
691 #if __TBB_TASK_GROUP_CONTEXT
692  __TBB_ASSERT( t->prefix().context == t->prefix().next->prefix().context,
693  "all the root tasks in list must share the same context");
694 #endif /* __TBB_TASK_GROUP_CONTEXT */
695  }
696  dummy.prefix().ref_count = n+1;
697  if( n>1 )
698  local_spawn( first->prefix().next, next );
699  local_wait_for_all( dummy, first );
700 }
virtual void local_wait_for_all(task &parent, task *child)=0
intptr_t reference_count
A reference count.
Definition: task.h:117
static bool is_set(generic_scheduler *s)
Used to check validity of the local scheduler TLS contents.
Definition: governor.cpp:120
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define __TBB_CONTEXT_ARG(arg1, context)
void local_spawn(task *first, task *&next)
Definition: scheduler.cpp:614
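The dummy parent with ref_count = n+1 is the usual waiting pattern: each root decrements the counter on completion, and the extra reference keeps the waiter from returning until it explicitly joins. A rough sketch of just the counting idea, using std::atomic and std::thread instead of the scheduler (purely illustrative):

#include <atomic>
#include <thread>
#include <vector>
#include <cstdio>

int main() {
    const int n = 4;
    // Mimics dummy.prefix().ref_count = n + 1: n children plus the waiter's own slot.
    std::atomic<int> ref_count{n + 1};

    std::vector<std::thread> children;
    for (int i = 0; i < n; ++i)
        children.emplace_back([&ref_count, i] {
            std::printf("child %d done\n", i);
            ref_count.fetch_sub(1, std::memory_order_acq_rel);  // task completion
        });

    // local_wait_for_all, in spirit: wait until only the waiter's own reference remains.
    while (ref_count.load(std::memory_order_acquire) > 1)
        std::this_thread::yield();

    for (auto& t : children) t.join();
    std::printf("all roots finished\n");
}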

◆ local_wait_for_all()

virtual void tbb::internal::generic_scheduler::local_wait_for_all ( task parent,
task child 
)
pure virtual

◆ lock_task_pool()

task ** tbb::internal::generic_scheduler::lock_task_pool ( arena_slot victim_arena_slot) const
inline

Locks victim's task pool, and returns pointer to it. The pointer can be NULL.

Garbles victim_arena_slot->task_pool for the duration of the lock.

ATTENTION: This method is mostly the same as generic_scheduler::acquire_task_pool(), with slightly different logic for the slot state checks (the slot can be empty, locked, or point to any task pool other than ours, and asynchronous transitions between all these states are possible). Thus, if either method is changed, consider changing its counterpart as well.

Definition at line 500 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_Yield, tbb::internal::as_atomic(), EmptyTaskPool, GATHER_STATISTIC, ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_limit, sync_cancel, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

500  {
501  task** victim_task_pool;
502  bool sync_prepare_done = false;
503  for( atomic_backoff backoff;; /*backoff pause embedded in the loop*/) {
504  victim_task_pool = victim_arena_slot->task_pool;
505  // NOTE: Do not use comparison of head and tail indices to check for
506  // the presence of work in the victim's task pool, as they may give
507  // incorrect indication because of task pool relocations and resizes.
508  if ( victim_task_pool == EmptyTaskPool ) {
509  // The victim thread emptied its task pool - nothing to lock
510  if( sync_prepare_done )
511  ITT_NOTIFY(sync_cancel, victim_arena_slot);
512  break;
513  }
514  if( victim_task_pool != LockedTaskPool &&
515  as_atomic(victim_arena_slot->task_pool).compare_and_swap(LockedTaskPool, victim_task_pool ) == victim_task_pool )
516  {
517  // We've locked victim's task pool
518  ITT_NOTIFY(sync_acquired, victim_arena_slot);
519  break;
520  }
521  else if( !sync_prepare_done ) {
522  // Start waiting
523  ITT_NOTIFY(sync_prepare, victim_arena_slot);
524  sync_prepare_done = true;
525  }
526  GATHER_STATISTIC( ++my_counters.thieves_conflicts );
527  // Someone else acquired a lock, so pause and do exponential backoff.
528 #if __TBB_STEALING_ABORT_ON_CONTENTION
529  if(!backoff.bounded_pause()) {
530  // the 16 was acquired empirically and a theory behind it supposes
531  // that number of threads becomes much bigger than number of
532  // tasks which can be spawned by one thread causing excessive contention.
533  // TODO: However even small arenas can benefit from the abort on contention
534  // if preemption of a thief is a problem
535  if(my_arena->my_limit >= 16)
536  return EmptyTaskPool;
537  __TBB_Yield();
538  }
539 #else
540  backoff.pause();
541 #endif
542  }
543  __TBB_ASSERT( victim_task_pool == EmptyTaskPool ||
544  (victim_arena_slot->task_pool == LockedTaskPool && victim_task_pool != LockedTaskPool),
545  "not really locked victim's task pool?" );
546  return victim_task_pool;
547 } // generic_scheduler::lock_task_pool
atomic< unsigned > my_limit
The maximal number of currently busy slots.
Definition: arena.h:65
#define EmptyTaskPool
Definition: scheduler.h:42
#define GATHER_STATISTIC(x)
atomic< T > & as_atomic(T &t)
Definition: atomic.h:543
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define __TBB_Yield()
Definition: ibm_aix51.h:44
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
#define LockedTaskPool
Definition: scheduler.h:43
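Locking a victim's pool is a compare-and-swap of the slot's pool pointer to the LockedTaskPool sentinel, with backoff while somebody else holds it and an early exit when the victim has no pool published. A condensed sketch with std::atomic (sentinels as in the earlier sketch; the backoff policy here is illustrative):

#include <atomic>
#include <thread>
#include <cstdint>

using TaskPtr = void*;
static TaskPtr* const kEmptyPool  = nullptr;
static TaskPtr* const kLockedPool = reinterpret_cast<TaskPtr*>(~std::uintptr_t(0));

struct Slot { std::atomic<TaskPtr*> task_pool{kEmptyPool}; };

// Lock a victim's pool. Returns the pool pointer, or kEmptyPool if there is
// nothing to steal. The caller must restore the pointer to unlock.
TaskPtr* lock_victim_pool(Slot& victim) {
    for (unsigned pause = 1;; pause = pause < 16 ? pause * 2 : 16) {
        TaskPtr* pool = victim.task_pool.load(std::memory_order_acquire);
        if (pool == kEmptyPool)
            return kEmptyPool;                       // victim has no pool published
        if (pool != kLockedPool &&
            victim.task_pool.compare_exchange_strong(pool, kLockedPool))
            return pool;                             // we hold the lock now
        // Someone else holds the lock: back off, then retry.
        for (unsigned i = 0; i < pause; ++i) std::this_thread::yield();
    }
}

void unlock_victim_pool(Slot& victim, TaskPtr* pool) {
    victim.task_pool.store(pool, std::memory_order_release);
}

int main() {
    Slot s;
    TaskPtr storage[4] = {};
    s.task_pool.store(storage);
    TaskPtr* locked = lock_victim_pool(s);
    unlock_victim_pool(s, locked);
}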

◆ master_outermost_level()

bool tbb::internal::generic_scheduler::master_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a master thread.

Returns true when this scheduler instance is associated with an application thread, and is not executing any TBB task. This includes being in a TBB dispatch loop (one of wait_for_all methods) invoked directly from that thread.

Definition at line 571 of file scheduler.h.

Referenced by tbb::task_scheduler_init::internal_terminate(), and tbb::interface7::internal::task_arena_base::internal_wait().

571  {
572  return !is_worker() && outermost_level();
573 }
bool outermost_level() const
True if the scheduler is on the outermost dispatch level.
Definition: scheduler.h:567
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:591

◆ max_threads_in_arena()

unsigned tbb::internal::generic_scheduler::max_threads_in_arena ( )
inline

Returns the concurrency limit of the current arena.

Definition at line 595 of file scheduler.h.

References __TBB_ASSERT.

Referenced by tbb::internal::get_initial_auto_partitioner_divisor(), and tbb::internal::affinity_partitioner_base_v3::resize().

595  {
596  __TBB_ASSERT(my_arena, NULL);
597  return my_arena->my_num_slots;
598 }
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
unsigned my_num_slots
The number of slots in the arena.
Definition: arena.h:149

◆ nested_arena_entry()

void tbb::internal::generic_scheduler::nested_arena_entry ( arena a,
size_t  slot_index 
)

Definition at line 675 of file arena.cpp.

References __TBB_ASSERT, tbb::internal::market::adjust_demand(), tbb::internal::governor::assume_scheduler(), attach_arena(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::arena_base::my_market, and tbb::internal::arena_base::my_num_reserved_slots.

Referenced by tbb::internal::nested_arena_context::nested_arena_context().

675  {
676  __TBB_ASSERT( is_alive(a->my_guard), NULL );
677  __TBB_ASSERT( a!=my_arena, NULL);
678 
679  // overwrite arena settings
680 #if __TBB_TASK_PRIORITY
681  if ( my_offloaded_tasks )
682  my_arena->orphan_offloaded_tasks( *this );
683  my_offloaded_tasks = NULL;
684 #endif /* __TBB_TASK_PRIORITY */
685  attach_arena( a, slot_index, /*is_master*/true );
686  __TBB_ASSERT( my_arena == a, NULL );
688  // TODO? ITT_NOTIFY(sync_acquired, a->my_slots + index);
689  // TODO: it requires market to have P workers (not P-1)
690  // TODO: a preempted worker should be excluded from assignment to other arenas e.g. my_slack--
691  if( !is_worker() && slot_index >= my_arena->my_num_reserved_slots )
693 #if __TBB_ARENA_OBSERVER
694  my_last_local_observer = 0; // TODO: try optimize number of calls
695  my_arena->my_observers.notify_entry_observers( my_last_local_observer, /*worker=*/false );
696 #endif
697 }
void attach_arena(arena *, size_t index, bool is_master)
Definition: arena.cpp:36
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:152
market * my_market
The market that owns this arena.
Definition: arena.h:131
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:591
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
static void assume_scheduler(generic_scheduler *s)
Temporarily set TLS slot to the given scheduler.
Definition: governor.cpp:116
void adjust_demand(arena &, int delta)
Request that the arena's need for workers be adjusted.
Definition: market.cpp:586

◆ nested_arena_exit()

void tbb::internal::generic_scheduler::nested_arena_exit ( )

Definition at line 699 of file arena.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), tbb::internal::market::adjust_demand(), is_worker(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::arena_base::my_exit_monitors, tbb::internal::arena_base::my_market, tbb::internal::arena_base::my_num_reserved_slots, tbb::internal::arena_slot_line1::my_scheduler, tbb::internal::arena::my_slots, and tbb::internal::concurrent_monitor::notify_one().

699  {
700 #if __TBB_ARENA_OBSERVER
701  my_arena->my_observers.notify_exit_observers( my_last_local_observer, /*worker=*/false );
702 #endif /* __TBB_ARENA_OBSERVER */
703 #if __TBB_TASK_PRIORITY
704  if ( my_offloaded_tasks )
705  my_arena->orphan_offloaded_tasks( *this );
706 #endif
709  // Free the master slot.
710  __TBB_ASSERT(my_arena->my_slots[my_arena_index].my_scheduler, "A slot is already empty");
712  my_arena->my_exit_monitors.notify_one(); // do not relax!
713 }
arena_slot my_slots[1]
Definition: arena.h:296
concurrent_monitor my_exit_monitors
Waiting object for master threads that cannot join the arena.
Definition: arena.h:167
generic_scheduler * my_scheduler
Scheduler of the thread attached to the slot.
unsigned my_num_reserved_slots
The number of reserved slots (can be occupied only by masters).
Definition: arena.h:152
void notify_one()
Notify one thread about the event.
market * my_market
The market that owns this arena.
Definition: arena.h:131
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:591
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
void adjust_demand(arena &, int delta)
Request that the arena's need for workers be adjusted.
Definition: market.cpp:586
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:68

◆ outermost_level()

bool tbb::internal::generic_scheduler::outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level.

Definition at line 567 of file scheduler.h.

Referenced by tbb::interface7::internal::delegated_task::execute(), and tbb::interface7::internal::wait_task::execute().

567  {
568  return my_properties.outermost;
569 }
bool outermost
Indicates that a scheduler is on outermost level.
Definition: scheduler.h:53
scheduler_properties my_properties
Definition: scheduler.h:91

◆ plugged_return_list()

static task* tbb::internal::generic_scheduler::plugged_return_list ( )
inlinestatic

Special value used to mark my_return_list as not taking any more entries.

Definition at line 376 of file scheduler.h.

Referenced by free_nonlocal_small_task(), and free_scheduler().

376 {return (task*)(intptr_t)(-1);}

◆ prepare_for_spawning()

task * tbb::internal::generic_scheduler::prepare_for_spawning ( task t)
inline

Checks if t is affinitized to another thread, and if so, bundles it as proxy.

Returns either t or proxy containing t.

Definition at line 558 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_CONTEXT_ARG, __TBB_ISOLATION_EXPR, allocate_task(), tbb::task::allocated, tbb::task::context(), tbb::internal::es_ref_count_active, tbb::internal::es_task_proxy, tbb::internal::is_critical(), is_version_3_task(), ITT_NOTIFY, tbb::internal::task_proxy::location_mask, tbb::internal::arena::mailbox(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::task_proxy::outbox, parent, tbb::task::parent(), tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::push(), tbb::task::ready, tbb::task::state(), sync_releasing, and tbb::internal::task_proxy::task_and_tag.

Referenced by local_spawn().

558  {
559  __TBB_ASSERT( t->state()==task::allocated, "attempt to spawn task that is not in 'allocated' state" );
560  t->prefix().state = task::ready;
561 #if TBB_USE_ASSERT
562  if( task* parent = t->parent() ) {
563  internal::reference_count ref_count = parent->prefix().ref_count;
564  __TBB_ASSERT( ref_count>=0, "attempt to spawn task whose parent has a ref_count<0" );
565  __TBB_ASSERT( ref_count!=0, "attempt to spawn task whose parent has a ref_count==0 (forgot to set_ref_count?)" );
566  parent->prefix().extra_state |= es_ref_count_active;
567  }
568 #endif /* TBB_USE_ASSERT */
569  affinity_id dst_thread = t->prefix().affinity;
570  __TBB_ASSERT( dst_thread == 0 || is_version_3_task(*t),
571  "backwards compatibility to TBB 2.0 tasks is broken" );
572 #if __TBB_TASK_ISOLATION
573  isolation_tag isolation = my_innermost_running_task->prefix().isolation;
574  t->prefix().isolation = isolation;
575 #endif /* __TBB_TASK_ISOLATION */
576  if( dst_thread != 0 && dst_thread != my_affinity_id ) {
577  task_proxy& proxy = (task_proxy&)allocate_task( sizeof(task_proxy),
578  __TBB_CONTEXT_ARG(NULL, NULL) );
579  // Mark as a proxy
580  proxy.prefix().extra_state = es_task_proxy;
581  proxy.outbox = &my_arena->mailbox(dst_thread);
582  // Mark proxy as present in both locations (sender's task pool and destination mailbox)
583  proxy.task_and_tag = intptr_t(t) | task_proxy::location_mask;
584 #if __TBB_TASK_PRIORITY
585  poison_pointer( proxy.prefix().context );
586 #endif /* __TBB_TASK_PRIORITY */
587  __TBB_ISOLATION_EXPR( proxy.prefix().isolation = isolation );
588  ITT_NOTIFY( sync_releasing, proxy.outbox );
589  // Mail the proxy - after this point t may be destroyed by another thread at any moment.
590  proxy.outbox->push(&proxy);
591  return &proxy;
592  }
593  return t;
594 }
static const intptr_t location_mask
Definition: mailbox.h:32
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:946
#define __TBB_ISOLATION_EXPR(isolation)
intptr_t reference_count
A reference count.
Definition: task.h:117
task object is freshly allocated or recycled.
Definition: task.h:617
mail_outbox & mailbox(affinity_id id)
Get reference to mailbox corresponding to given affinity_id.
Definition: arena.h:204
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
static bool is_version_3_task(task &t)
Definition: scheduler.h:129
unsigned short affinity_id
An id as used for specifying affinity.
Definition: task.h:120
task & allocate_task(size_t number_of_bytes, __TBB_CONTEXT_ARG(task *parent, task_group_context *context))
Allocate task object, either from the heap or a free list.
Definition: scheduler.cpp:300
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:89
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define __TBB_CONTEXT_ARG(arg1, context)
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
task is in ready pool, or is going to be put there, or was just taken off.
Definition: task.h:615
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
Set if ref_count might be changed by another thread. Used for debugging.
intptr_t isolation_tag
A tag for task isolation.
Definition: task.h:124
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:77
Tag for v3 task_proxy.
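Stripped of the proxy bookkeeping, the decision is simple: a task affinitized to another thread is wrapped in a proxy, the proxy is mailed to that thread's inbox, and the same proxy is handed back so it also lands in the local task pool; whichever side pops it first wins the task. A small sketch of that routing (all types and names here are made up; the real proxy is itself a task and uses the tag bits shown in the earlier extract sketch):

#include <deque>
#include <map>
#include <cstdio>

struct Task { int id; unsigned affinity; };            // affinity 0 == no preference

// A stand-in proxy; in TBB the proxy lives in two places at once.
struct Proxy { Task* payload; };

struct Spawner {
    unsigned my_affinity_id;
    std::map<unsigned, std::deque<Proxy*>> mailboxes;  // one inbox per affinity id

    // Tasks bound to another thread are wrapped in a proxy and mailed; the same
    // proxy is also handed back so the caller puts it into the local task pool.
    void* prepare_for_spawning(Task* t) {
        if (t->affinity != 0 && t->affinity != my_affinity_id) {
            Proxy* proxy = new Proxy{t};
            mailboxes[t->affinity].push_back(proxy);
            return proxy;
        }
        return t;                                      // no affinity: spawn as-is
    }
};

int main() {
    Spawner s{1, {}};
    Task plain{10, 0}, bound{11, 3};
    s.prepare_for_spawning(&plain);                    // stays a plain task
    s.prepare_for_spawning(&bound);                    // mailed to thread 3's inbox
    std::printf("inbox 3 holds %zu proxy\n", s.mailboxes[3].size());
    delete s.mailboxes[3].front();                     // clean up the sketch's proxy
}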

◆ prepare_task_pool()

size_t tbb::internal::generic_scheduler::prepare_task_pool ( size_t  n)
inline

Makes sure that the task pool can accommodate at least n more elements.

If necessary, relocates existing task pointers or grows the ready task deque. Returns the (possibly updated) tail index (not accounting for n).

Definition at line 402 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), acquire_task_pool(), tbb::internal::arena_slot::allocate_task_pool(), assert_task_pool_valid(), commit_relocated_tasks(), tbb::internal::arena_slot::fill_with_canary_pattern(), tbb::internal::arena_slot_line1::head, is_quiescent_local_task_pool_reset(), is_task_pool_published(), min_task_pool_size, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena_slot_line2::my_task_pool_size, new_size, tbb::internal::NFS_Free(), tbb::internal::arena_slot_line2::tail, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by enqueue(), and local_spawn().

402  {
403  size_t T = __TBB_load_relaxed(my_arena_slot->tail); // mirror
404  if ( T + num_tasks <= my_arena_slot->my_task_pool_size )
405  return T;
406 
407  size_t new_size = num_tasks;
408 
412  if ( num_tasks < min_task_pool_size ) new_size = min_task_pool_size;
413  my_arena_slot->allocate_task_pool( new_size );
414  return 0;
415  }
 416 
 417  acquire_task_pool();
 418  size_t H = __TBB_load_relaxed( my_arena_slot->head ); // mirror
419  task** task_pool = my_arena_slot->task_pool_ptr;;
421  // Count not skipped tasks. Consider using std::count_if.
422  for ( size_t i = H; i < T; ++i )
423  if ( task_pool[i] ) ++new_size;
424  // If the free space at the beginning of the task pool is too short, we
425  // are likely facing a pathological single-producer-multiple-consumers
426  // scenario, and thus it's better to expand the task pool
427  bool allocate = new_size > my_arena_slot->my_task_pool_size - min_task_pool_size/4;
428  if ( allocate ) {
429  // Grow task pool. As this operation is rare, and its cost is asymptotically
430  // amortizable, we can tolerate new task pool allocation done under the lock.
431  if ( new_size < 2 * my_arena_slot->my_task_pool_size )
432  new_size = 2 * my_arena_slot->my_task_pool_size;
433  my_arena_slot->allocate_task_pool( new_size ); // updates my_task_pool_size
434  }
435  // Filter out skipped tasks. Consider using std::copy_if.
436  size_t T1 = 0;
437  for ( size_t i = H; i < T; ++i )
438  if ( task_pool[i] )
439  my_arena_slot->task_pool_ptr[T1++] = task_pool[i];
440  // Deallocate the previous task pool if a new one has been allocated.
441  if ( allocate )
442  NFS_Free( task_pool );
443  else
445  // Publish the new state.
448  return T1;
449 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
void fill_with_canary_pattern(size_t, size_t)
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
void acquire_task_pool() const
Locks the local task pool.
Definition: scheduler.cpp:456
static const size_t min_task_pool_size
Definition: scheduler.h:290
void __TBB_EXPORTED_FUNC NFS_Free(void *)
Free memory allocated by NFS_Allocate.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
size_t my_task_pool_size
Capacity of the primary task pool (number of elements - pointers to task).
void allocate_task_pool(size_t n)
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
void commit_relocated_tasks(size_t new_tail)
Makes relocated tasks visible to thieves and releases the local task pool.
Definition: scheduler.h:637
bool is_quiescent_local_task_pool_reset() const
Definition: scheduler.h:562
__TBB_atomic size_t head
Index of the first ready task in the deque.
Here is the call graph for this function:
Here is the caller graph for this function:
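
The relocation/growth policy above can be summarized with a small stand-alone sketch (not TBB source; SimplePool, slots and min_pool_size are hypothetical stand-ins for the arena slot, task_pool_ptr and min_task_pool_size): first try to satisfy the request with the existing headroom, otherwise count the surviving entries, grow the storage only if compaction alone would leave too little free space, and finally compact the live pointers to the front.

#include <algorithm>
#include <cstddef>
#include <vector>

struct SimplePool {
    std::vector<void*> slots;              // plays the role of task_pool_ptr
    std::size_t head = 0;                  // index of the first ready task
    std::size_t tail = 0;                  // index past the last ready task
    static constexpr std::size_t min_pool_size = 64;

    SimplePool() : slots(min_pool_size, nullptr) {}

    // Make room for n more pushes; returns the new tail index.
    std::size_t prepare(std::size_t n) {
        if (tail + n <= slots.size())
            return tail;                   // enough headroom already
        std::size_t live = n;              // requested headroom plus surviving entries
        for (std::size_t i = head; i < tail; ++i)
            if (slots[i]) ++live;
        // Grow only if compaction alone would leave too little free space.
        bool grow = live > slots.size() - min_pool_size / 4;
        std::vector<void*> bigger;
        if (grow)
            bigger.resize(std::max(live, 2 * slots.size()));   // new slots start as nullptr
        std::vector<void*>& dst = grow ? bigger : slots;
        // Compact live entries to the front; left-to-right copy is safe in place.
        std::size_t t1 = 0;
        for (std::size_t i = head; i < tail; ++i)
            if (slots[i]) dst[t1++] = slots[i];
        if (grow) slots.swap(bigger);
        head = 0;
        tail = t1;
        return t1;
    }
};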

◆ publish_task_pool()

void tbb::internal::generic_scheduler::publish_task_pool ( )
inline

Used by workers to enter the task pool.

Does not lock the task pool if the arena slot has been successfully grabbed.

Definition at line 1210 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_with_release(), EmptyTaskPool, tbb::internal::arena_slot_line1::head, ITT_NOTIFY, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::arena::my_slots, sync_releasing, tbb::internal::arena_slot_line2::tail, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by enqueue(), get_task(), and local_spawn().

1210  {
1211  __TBB_ASSERT ( my_arena, "no arena: initialization not completed?" );
1212  __TBB_ASSERT ( my_arena_index < my_arena->my_num_slots, "arena slot index is out-of-bound" );
1214  __TBB_ASSERT ( my_arena_slot->task_pool == EmptyTaskPool, "someone else grabbed my arena slot?" );
1216  "entering arena without tasks to share" );
1217  // Release signal on behalf of previously spawned tasks (when this thread was not in arena yet)
1220 }
arena_slot my_slots[1]
Definition: arena.h:296
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
#define EmptyTaskPool
Definition: scheduler.h:42
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:68
__TBB_atomic size_t head
Index of the first ready task in the deque.
Here is the call graph for this function:
Here is the caller graph for this function:
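
The essential idea, a release-store of the privately prepared pool pointer into the shared slot, can be sketched independently of TBB (Slot, Task and publish are stand-ins, not the real types):

#include <atomic>

struct Task;                                  // opaque stand-in

struct Slot {
    std::atomic<Task**> task_pool{nullptr};   // nullptr plays the role of EmptyTaskPool
    Task** task_pool_ptr = nullptr;           // storage filled privately by the owning thread
};

// A thief that loads a non-null task_pool with acquire semantics is then
// guaranteed to see the tasks the owner wrote into task_pool_ptr beforehand.
inline void publish(Slot& slot) {
    slot.task_pool.store(slot.task_pool_ptr, std::memory_order_release);
}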

◆ receive_or_steal_task()

virtual task* tbb::internal::generic_scheduler::receive_or_steal_task ( __TBB_ISOLATION_ARG(__TBB_atomic reference_count &completion_ref_count, isolation_tag isolation)  )
pure virtual

Try getting a task from other threads (via the mailbox, stealing, the FIFO queue, or orphan adoption).

Returns obtained task or NULL if all attempts fail.

Implemented in tbb::internal::custom_scheduler< SchedulerTraits >.

Referenced by tbb::internal::arena::process().

Here is the caller graph for this function:
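
As a rough illustration of the search order described above (a hedged sketch, not the custom_scheduler implementation; the callables are hypothetical stand-ins for the mailbox, stealing and back-off machinery):

#include <atomic>

struct Task;

// Keep trying non-local sources until a task is found or the awaited
// reference count shows that the waited-for work has completed.
template <typename Mailbox, typename Steal, typename Backoff>
Task* receive_or_steal(const std::atomic<long>& completion_ref_count,
                       Mailbox get_mailbox_task, Steal steal_task, Backoff back_off) {
    for (;;) {
        if (completion_ref_count.load(std::memory_order_acquire) == 1)
            return nullptr;                  // nothing left to wait for
        if (Task* t = get_mailbox_task())    // 1) tasks mailed to us by affinity
            return t;
        if (Task* t = steal_task())          // 2) steal from a random victim
            return t;
        back_off();                          // 3) pause/yield before the next attempt
    }
}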

◆ release_task_pool()

void tbb::internal::generic_scheduler::release_task_pool ( ) const
inline

Unlocks the local task pool.

Restores my_arena_slot->task_pool munged by acquire_task_pool. Requires correctly set my_arena_slot->task_pool_ptr.

Definition at line 485 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), is_task_pool_published(), ITT_NOTIFY, LockedTaskPool, tbb::internal::scheduler_state::my_arena_slot, sync_releasing, tbb::internal::arena_slot_line1::task_pool, and tbb::internal::arena_slot_line2::task_pool_ptr.

Referenced by cleanup_master(), enqueue(), generic_scheduler(), and get_task().

485  {
486  if ( !is_task_pool_published() )
487  return; // we are not in arena - nothing to unlock
488  __TBB_ASSERT( my_arena_slot, "we are not in arena" );
489  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "arena slot is not locked" );
492 }
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
#define LockedTaskPool
Definition: scheduler.h:43
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
Here is the call graph for this function:
Here is the caller graph for this function:
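
The lock protocol that acquire_task_pool/release_task_pool implement can be sketched with the published pointer doubling as a lock word (a simplified model; Slot, kLockedPool and the function names are stand-ins, and the sketch assumes the pool is already published):

#include <atomic>
#include <cstdint>

struct Task;

// Sentinel in the spirit of LockedTaskPool; any distinct non-pool value works.
static Task** const kLockedPool = reinterpret_cast<Task**>(~std::uintptr_t(0));

struct Slot {
    std::atomic<Task**> task_pool{nullptr};   // shared: what thieves observe
    Task** task_pool_ptr = nullptr;           // private: the real storage
};

inline void acquire_pool(Slot& s) {
    Task** expected = s.task_pool_ptr;
    // Swap the published pointer for the sentinel; spin while someone else holds it.
    while (!s.task_pool.compare_exchange_weak(expected, kLockedPool,
                                              std::memory_order_acquire))
        expected = s.task_pool_ptr;
}

inline void release_pool(Slot& s) {
    // Restore the pointer "munged" by acquire_pool, with release semantics.
    s.task_pool.store(s.task_pool_ptr, std::memory_order_release);
}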

◆ reset_task_pool_and_leave()

void tbb::internal::generic_scheduler::reset_task_pool_and_leave ( )
inline

Resets head and tail indices to 0, and leaves task pool.

The task pool must be locked by the owner (via acquire_task_pool).

Definition at line 620 of file scheduler.h.

References __TBB_ASSERT, tbb::internal::__TBB_store_relaxed(), and LockedTaskPool.

Referenced by get_task().

620  {
621  __TBB_ASSERT( my_arena_slot->task_pool == LockedTaskPool, "Task pool must be locked when resetting task pool" );
624  leave_task_pool();
625 }
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
__TBB_atomic size_t tail
Index of the element following the last ready task in the deque.
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void leave_task_pool()
Leave the task pool.
Definition: scheduler.cpp:1222
#define LockedTaskPool
Definition: scheduler.h:43
arena_slot * my_arena_slot
Pointer to the slot in the arena we own at the moment.
Definition: scheduler.h:71
__TBB_atomic size_t head
Index of the first ready task in the deque.
Here is the call graph for this function:
Here is the caller graph for this function:

◆ spawn()

void tbb::internal::generic_scheduler::spawn ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 702 of file scheduler.cpp.

References tbb::internal::governor::local_scheduler(), and local_spawn().

702  {
704 }
auto first(Container &c) -> decltype(begin(c))
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:122
void local_spawn(task *first, task *&next)
Definition: scheduler.cpp:614
Here is the call graph for this function:

◆ spawn_root_and_wait()

void tbb::internal::generic_scheduler::spawn_root_and_wait ( task first,
task *&  next 
)
virtual

For internal use only.

Implements tbb::internal::scheduler.

Definition at line 706 of file scheduler.cpp.

References tbb::internal::governor::local_scheduler(), and local_spawn_root_and_wait().

706  {
708 }
void local_spawn_root_and_wait(task *first, task *&next)
Definition: scheduler.cpp:681
auto first(Container &c) -> decltype(begin(c))
static generic_scheduler * local_scheduler()
Obtain the thread-local instance of the TBB scheduler.
Definition: governor.h:122
Here is the call graph for this function:

◆ steal_task()

task * tbb::internal::generic_scheduler::steal_task ( __TBB_ISOLATION_EXPR(isolation_tag isolation)  )

Attempts to steal a task from a randomly chosen thread/scheduler.

Definition at line 1071 of file scheduler.cpp.

References __TBB_ISOLATION_ARG, EmptyTaskPool, tbb::internal::es_task_is_stolen, tbb::internal::task_proxy::extract_task(), GATHER_STATISTIC, tbb::internal::FastRandom::get(), is_proxy(), is_version_3_task(), tbb::internal::scheduler_state::my_affinity_id, tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_index, tbb::internal::scheduler_state::my_innermost_running_task, tbb::internal::arena_base::my_limit, my_random, tbb::internal::arena::my_slots, tbb::task::note_affinity(), tbb::internal::task_proxy::pool_bit, tbb::task::prefix(), steal_task_from(), and tbb::internal::arena_slot_line1::task_pool.

1071  {
1072  // Try to steal a task from a random victim.
1073  size_t k = my_random.get() % (my_arena->my_limit-1);
1074  arena_slot* victim = &my_arena->my_slots[k];
1075  // The following condition excludes the master that might have
1076  // already taken our previous place in the arena from the list
1077  // of potential victims. But since such a situation can take
1078  // place only in case of significant oversubscription, keeping
1079  // the checks simple seems to be preferable to complicating the code.
1080  if( k >= my_arena_index )
1081  ++victim; // Adjusts random distribution to exclude self
1082  task **pool = victim->task_pool;
1083  task *t = NULL;
1084  if( pool == EmptyTaskPool || !(t = steal_task_from( __TBB_ISOLATION_ARG(*victim, isolation) )) )
1085  return NULL;
1086  if( is_proxy(*t) ) {
1087  task_proxy &tp = *(task_proxy*)t;
1088  t = tp.extract_task<task_proxy::pool_bit>();
1089  if ( !t ) {
1090  // Proxy was empty, so it's our responsibility to free it
1091  free_task<no_cache_small_task>(tp);
1092  return NULL;
1093  }
1094  GATHER_STATISTIC( ++my_counters.proxies_stolen );
1095  }
1096  t->prefix().extra_state |= es_task_is_stolen;
1097  if( is_version_3_task(*t) ) {
1099  t->prefix().owner = this;
1100  t->note_affinity( my_affinity_id );
1101  }
1102  GATHER_STATISTIC( ++my_counters.steals_committed );
1103  return t;
1104 }
arena_slot my_slots[1]
Definition: arena.h:296
atomic< unsigned > my_limit
The maximal number of currently busy slots.
Definition: arena.h:65
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:946
#define EmptyTaskPool
Definition: scheduler.h:42
#define GATHER_STATISTIC(x)
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:269
static bool is_version_3_task(task &t)
Definition: scheduler.h:129
affinity_id my_affinity_id
The mailbox id assigned to this scheduler.
Definition: scheduler.h:89
task * steal_task_from(__TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation))
Steal task from another scheduler's ready pool.
Definition: scheduler.cpp:1106
FastRandom my_random
Random number generator used for picking a random victim from which to steal.
Definition: scheduler.h:158
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
static const intptr_t pool_bit
Definition: mailbox.h:30
#define __TBB_ISOLATION_ARG(arg1, isolation)
size_t my_arena_index
Index of the arena slot the scheduler occupies now, or occupied last time.
Definition: scheduler.h:68
task * my_innermost_running_task
Innermost task whose task::execute() is running. A dummy task on the outermost level.
Definition: scheduler.h:77
unsigned short get()
Get a random number.
Definition: tbb_misc.h:139
Set if the task has been stolen.
Here is the call graph for this function:
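
The victim-selection trick in lines 1073-1081 (drawing from one fewer slot and shifting the index past our own) keeps the choice uniform over everyone but ourselves. A minimal stand-alone version (hypothetical names; std::mt19937 in place of FastRandom):

#include <cstddef>
#include <random>

// Pick a victim slot other than my_index, uniformly among the other
// num_slots - 1 slots. Requires num_slots >= 2.
std::size_t pick_victim(std::size_t my_index, std::size_t num_slots, std::mt19937& rng) {
    std::uniform_int_distribution<std::size_t> dist(0, num_slots - 2);
    std::size_t k = dist(rng);
    if (k >= my_index)
        ++k;              // skip over our own slot without biasing the choice
    return k;
}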

◆ steal_task_from()

task * tbb::internal::generic_scheduler::steal_task_from ( __TBB_ISOLATION_ARG(arena_slot &victim_arena_slot, isolation_tag isolation)  )

Steal task from another scheduler's ready pool.

Definition at line 1106 of file scheduler.cpp.

References __TBB_ASSERT, __TBB_cl_evict, __TBB_control_consistency_helper, __TBB_ISOLATION_EXPR, tbb::internal::__TBB_load_relaxed(), tbb::internal::__TBB_store_relaxed(), tbb::internal::arena::advertise_new_work(), tbb::atomic_fence(), GATHER_STATISTIC, tbb::internal::arena_slot_line1::head, is_proxy(), tbb::internal::task_proxy::is_shared(), ITT_NOTIFY, lock_task_pool(), tbb::internal::scheduler_state::my_arena, tbb::internal::scheduler_state::my_arena_slot, tbb::internal::scheduler_state::my_properties, tbb::internal::no_isolation, tbb::internal::task_proxy::outbox, tbb::internal::poison_pointer(), tbb::task::prefix(), tbb::internal::mail_outbox::recipient_is_idle(), tbb::internal::arena_slot_line2::tail, tbb::internal::task_proxy::task_and_tag, unlock_task_pool(), and tbb::internal::arena::wakeup.

Referenced by steal_task().

1106  {
1107  task** victim_pool = lock_task_pool( &victim_slot );
1108  if ( !victim_pool )
1109  return NULL;
1110  task* result = NULL;
1111  size_t H = __TBB_load_relaxed(victim_slot.head); // mirror
1112  size_t H0 = H;
1113  bool tasks_omitted = false;
1114  do {
1115  __TBB_store_relaxed( victim_slot.head, ++H );
1116  atomic_fence();
1117  if ( (intptr_t)H > (intptr_t)__TBB_load_relaxed( victim_slot.tail ) ) {
1118  // Stealing attempt failed, deque contents has not been changed by us
1119  GATHER_STATISTIC( ++my_counters.thief_backoffs );
1120  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1121  __TBB_ASSERT( !result, NULL );
1122  goto unlock;
1123  }
1124  __TBB_control_consistency_helper(); // on victim_slot.tail
1125  result = victim_pool[H-1];
1126  __TBB_ASSERT( !is_poisoned( result ), NULL );
1127 
1128  if ( result ) {
1129  __TBB_ISOLATION_EXPR( if ( isolation == no_isolation || isolation == result->prefix().isolation ) )
1130  {
1131  if ( !is_proxy( *result ) )
1132  break;
1133  task_proxy& tp = *static_cast<task_proxy*>(result);
1134  // If mailed task is likely to be grabbed by its destination thread, skip it.
1135  if ( !(task_proxy::is_shared( tp.task_and_tag ) && tp.outbox->recipient_is_idle()) )
1136  break;
1137  GATHER_STATISTIC( ++my_counters.proxies_bypassed );
1138  }
1139  // The task cannot be executed either due to isolation or proxy constraints.
1140  result = NULL;
1141  tasks_omitted = true;
1142  } else if ( !tasks_omitted ) {
1143  // Cleanup the task pool from holes until a task is skipped.
1144  __TBB_ASSERT( H0 == H-1, NULL );
1145  poison_pointer( victim_pool[H0] );
1146  H0 = H;
1147  }
1148  } while ( !result );
1149  __TBB_ASSERT( result, NULL );
1150 
1151  // emit "task was consumed" signal
1152  ITT_NOTIFY( sync_acquired, (void*)((uintptr_t)&victim_slot+sizeof( uintptr_t )) );
1153  poison_pointer( victim_pool[H-1] );
1154  if ( tasks_omitted ) {
1155  // Some proxies in the task pool have been omitted. Set the stolen task to NULL.
1156  victim_pool[H-1] = NULL;
1157  __TBB_store_relaxed( victim_slot.head, /*dead: H = */ H0 );
1158  }
1159 unlock:
1160  unlock_task_pool( &victim_slot, victim_pool );
1161 #if __TBB_PREFETCHING
1162  __TBB_cl_evict(&victim_slot.head);
1163  __TBB_cl_evict(&victim_slot.tail);
1164 #endif
1165  if ( tasks_omitted )
1166  // Synchronize with snapshot as the head and tail can be bumped which can falsely trigger EMPTY state
1168  return result;
1169 }
T __TBB_load_relaxed(const volatile T &location)
Definition: tbb_machine.h:738
void unlock_task_pool(arena_slot *victim_arena_slot, task **victim_task_pool) const
Unlocks victim's task pool.
Definition: scheduler.cpp:549
void __TBB_store_relaxed(volatile T &location, V value)
Definition: tbb_machine.h:742
#define __TBB_ISOLATION_EXPR(isolation)
static bool is_shared(intptr_t tat)
True if the proxy is stored both in its sender's pool and in the destination mailbox.
Definition: mailbox.h:46
#define GATHER_STATISTIC(x)
static bool is_proxy(const task &t)
True if t is a task_proxy.
Definition: scheduler.h:269
void poison_pointer(T *__TBB_atomic &)
Definition: tbb_stddef.h:305
void advertise_new_work()
If necessary, raise a flag that there is new job in arena.
Definition: arena.h:389
task ** lock_task_pool(arena_slot *victim_arena_slot) const
Locks victim&#39;s task pool, and returns pointer to it. The pointer can be NULL.
Definition: scheduler.cpp:500
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
#define __TBB_control_consistency_helper()
Definition: gcc_generic.h:60
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
#define __TBB_cl_evict(p)
Definition: mic_common.h:34
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
const isolation_tag no_isolation
Definition: task.h:125
void atomic_fence()
Sequentially consistent full memory fence.
Definition: tbb_machine.h:342
Here is the call graph for this function:
Here is the caller graph for this function:
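
The core of the protocol is: bump the victim's head, fence, then re-check against tail and roll back on conflict. A stand-alone sketch of a single simplified round (it omits proxies, isolation handling and pool locking, and assumes the thief has already locked the victim slot so no other thief races on head):

#include <atomic>
#include <cstddef>

struct Task;

struct VictimSlot {
    std::atomic<std::size_t> head{0};   // thieves take from here
    std::atomic<std::size_t> tail{0};   // the owner pushes/pops here
    Task** pool = nullptr;
};

Task* steal_one(VictimSlot& v) {
    std::size_t h = v.head.load(std::memory_order_relaxed);
    v.head.store(h + 1, std::memory_order_relaxed);          // claim the slot first
    std::atomic_thread_fence(std::memory_order_seq_cst);     // order the claim vs. reading tail
    if (static_cast<std::ptrdiff_t>(h + 1) >
        static_cast<std::ptrdiff_t>(v.tail.load(std::memory_order_relaxed))) {
        v.head.store(h, std::memory_order_relaxed);          // pool was empty: roll back
        return nullptr;
    }
    return v.pool[h];                                        // the task at the old head is ours
}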

◆ unlock_task_pool()

void tbb::internal::generic_scheduler::unlock_task_pool ( arena_slot victim_arena_slot,
task **  victim_task_pool 
) const
inline

Unlocks victim's task pool.

Restores victim_arena_slot->task_pool munged by lock_task_pool.

Definition at line 549 of file scheduler.cpp.

References __TBB_ASSERT, tbb::internal::__TBB_store_with_release(), ITT_NOTIFY, LockedTaskPool, sync_releasing, and tbb::internal::arena_slot_line1::task_pool.

Referenced by steal_task_from().

550  {
551  __TBB_ASSERT( victim_arena_slot, "empty victim arena slot pointer" );
552  __TBB_ASSERT( victim_arena_slot->task_pool == LockedTaskPool, "victim arena slot is not locked" );
553  ITT_NOTIFY(sync_releasing, victim_arena_slot);
554  __TBB_store_with_release( victim_arena_slot->task_pool, victim_task_pool );
555 }
#define __TBB_ASSERT(predicate, comment)
No-op version of __TBB_ASSERT.
Definition: tbb_stddef.h:165
void __TBB_store_with_release(volatile T &location, V value)
Definition: tbb_machine.h:716
#define ITT_NOTIFY(name, obj)
Definition: itt_notify.h:116
#define LockedTaskPool
Definition: scheduler.h:43
Here is the call graph for this function:
Here is the caller graph for this function:

◆ wait_until_empty()

void tbb::internal::generic_scheduler::wait_until_empty ( )

Definition at line 715 of file arena.cpp.

References local_wait_for_all(), tbb::internal::scheduler_state::my_arena, my_dummy_task, tbb::internal::arena_base::my_pool_state, tbb::task::prefix(), and tbb::internal::arena::SNAPSHOT_EMPTY.

Referenced by tbb::interface7::internal::task_arena_base::internal_wait().

715  {
716  my_dummy_task->prefix().ref_count++; // prevents exit from local_wait_for_all when local work is done, enforcing continued stealing
719  my_dummy_task->prefix().ref_count--;
720 }
virtual void local_wait_for_all(task &parent, task *child)=0
task * my_dummy_task
Fake root task created by slave threads.
Definition: scheduler.h:169
internal::task_prefix & prefix(internal::version_tag *=NULL) const
Get reference to corresponding task_prefix.
Definition: task.h:946
static const pool_state_t SNAPSHOT_EMPTY
No tasks to steal since last snapshot was taken.
Definition: arena.h:217
arena * my_arena
The arena that I own (if master) or am servicing at the moment (if worker)
Definition: scheduler.h:74
tbb::atomic< uintptr_t > my_pool_state
Current task pool state and estimate of available tasks amount.
Definition: arena.h:99
Here is the call graph for this function:
Here is the caller graph for this function:
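
The ref_count manipulation above is a guard pattern: an artificial extra reference keeps the wait loop from returning as soon as local work is done, so the thread keeps stealing until the arena snapshot is empty. A schematic version (DummyRoot and the callable are stand-ins, not the real task machinery):

#include <atomic>

struct DummyRoot { std::atomic<long> ref_count{1}; };

template <typename WaitForAll>
void wait_until_arena_is_empty(DummyRoot& root, WaitForAll wait_for_all) {
    root.ref_count.fetch_add(1, std::memory_order_relaxed);  // pin the wait loop open
    wait_for_all(root);                 // returns only once the arena looks empty
    root.ref_count.fetch_sub(1, std::memory_order_relaxed);  // drop the artificial reference
}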

◆ worker_outermost_level()

bool tbb::internal::generic_scheduler::worker_outermost_level ( ) const
inline

True if the scheduler is on the outermost dispatch level in a worker thread.

Definition at line 575 of file scheduler.h.

References tbb::task::prefix().

Referenced by tbb::internal::arena::free_arena(), and tbb::internal::arena::process().

575  {
576  return is_worker() && outermost_level();
577 }
bool outermost_level() const
True if the scheduler is on the outermost dispatch level.
Definition: scheduler.h:567
bool is_worker() const
True if running on a worker thread, false otherwise.
Definition: scheduler.h:591
Here is the call graph for this function:
Here is the caller graph for this function:

Friends And Related Function Documentation

◆ custom_scheduler

template<typename SchedulerTraits >
friend class custom_scheduler
friend

Definition at line 310 of file scheduler.h.

Member Data Documentation

◆ min_task_pool_size

const size_t tbb::internal::generic_scheduler::min_task_pool_size = 64
static

Initial size of the task deque, sufficient to serve four nested parallel_for calls with an iteration space of 65535 grains each without reallocation.

Definition at line 290 of file scheduler.h.

Referenced by enqueue(), generic_scheduler(), local_spawn(), and prepare_task_pool().
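
A plausible reading of this sizing (an interpretation, not spelled out in the source): recursive splitting of a range with about 65535 grains keeps on the order of log2(65535) ≈ 16 ready tasks per parallel_for in the owner's deque at any one time, so four nested calls need roughly 4 × 16 = 64 slots, which matches the value of 64 above.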

◆ my_auto_initialized

bool tbb::internal::generic_scheduler::my_auto_initialized

True if *this was created by automatic TBB initialization.

Definition at line 180 of file scheduler.h.

Referenced by tbb::internal::governor::auto_terminate(), tbb::internal::governor::init_scheduler(), and tbb::internal::governor::init_scheduler_weak().

◆ my_dummy_task

task* tbb::internal::generic_scheduler::my_dummy_task

Fake root task created by slave threads.

Definition at line 169 of file scheduler.h.

◆ my_free_list

task* tbb::internal::generic_scheduler::my_free_list

Free list of small tasks that can be reused.

Definition at line 161 of file scheduler.h.

Referenced by allocate_task(), and free_scheduler().

◆ my_market

◆ my_random

FastRandom tbb::internal::generic_scheduler::my_random

Random number generator used for picking a random victim from which to steal.

Definition at line 158 of file scheduler.h.

◆ my_ref_count

long tbb::internal::generic_scheduler::my_ref_count

Reference count for scheduler.

Number of task_scheduler_init objects that point to this scheduler

Definition at line 173 of file scheduler.h.

Referenced by tbb::internal::governor::auto_terminate(), tbb::internal::governor::init_scheduler(), and tbb::internal::governor::terminate_scheduler().

◆ my_return_list

task* tbb::internal::generic_scheduler::my_return_list

List of small tasks that have been returned to this scheduler by other schedulers.

Definition at line 383 of file scheduler.h.

Referenced by allocate_task(), free_scheduler(), and generic_scheduler().

◆ my_small_task_count

__TBB_atomic intptr_t tbb::internal::generic_scheduler::my_small_task_count

Number of small tasks that have been allocated by this scheduler.

Definition at line 379 of file scheduler.h.

Referenced by allocate_task(), and free_scheduler().

◆ my_stealing_threshold

uintptr_t tbb::internal::generic_scheduler::my_stealing_threshold

Position in the call stack specifying its maximal filling when stealing is still allowed.

Definition at line 138 of file scheduler.h.

Referenced by init_stack_info().

◆ null_arena_index

const size_t tbb::internal::generic_scheduler::null_arena_index = ~size_t(0)
static

Definition at line 144 of file scheduler.h.

◆ quick_task_size

const size_t tbb::internal::generic_scheduler::quick_task_size = 256-task_prefix_reservation_size
static

If sizeof(task) is <= quick_task_size, the task is handled via a free list instead of being malloc'd.

Definition at line 127 of file scheduler.h.

Referenced by allocate_task().
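
The small-task policy can be sketched generically: blocks at or below the threshold are recycled through a per-thread free list, while larger ones go straight to the heap (SmallObjectCache and its members are hypothetical, not TBB's allocate_task/free_task):

#include <cstddef>
#include <cstdlib>

struct FreeNode { FreeNode* next; };

struct SmallObjectCache {
    static constexpr std::size_t quick_size = 256;   // analogous threshold
    FreeNode* free_list = nullptr;                    // per-thread, so no locking needed

    void* allocate(std::size_t bytes) {
        if (bytes <= quick_size) {
            if (free_list) {                          // reuse a cached small block
                FreeNode* n = free_list;
                free_list = n->next;
                return n;
            }
            return std::malloc(quick_size);           // small blocks share one capacity
        }
        return std::malloc(bytes);                    // big objects bypass the cache
    }

    void deallocate(void* p, std::size_t bytes) {
        if (bytes <= quick_size) {                    // small blocks go back on the list
            FreeNode* n = static_cast<FreeNode*>(p);
            n->next = free_list;
            free_list = n;
        } else {
            std::free(p);
        }
    }
};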


The documentation for this class was generated from the following files:

Copyright © 2005-2019 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.