Drizzled Public API Documentation

sync0sync.cc
1 /*****************************************************************************
2 
3 Copyright (c) 1995, 2011, Innobase Oy. All Rights Reserved.
4 Copyright (C) 2008, Google Inc.
5 
6 Portions of this file contain modifications contributed and copyrighted by
7 Google, Inc. Those modifications are gratefully acknowledged and are described
8 briefly in the InnoDB documentation. The contributions by Google are
9 incorporated with their permission, and subject to the conditions contained in
10 the file COPYING.Google.
11 
12 This program is free software; you can redistribute it and/or modify it under
13 the terms of the GNU General Public License as published by the Free Software
14 Foundation; version 2 of the License.
15 
16 This program is distributed in the hope that it will be useful, but WITHOUT
17 ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
18 FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.
19 
20 You should have received a copy of the GNU General Public License along with
21 this program; if not, write to the Free Software Foundation, Inc., 51 Franklin
22 St, Fifth Floor, Boston, MA 02110-1301 USA
23 
24 *****************************************************************************/
25 
26 /**************************************************/
33 #include "sync0sync.h"
34 #ifdef UNIV_NONINL
35 #include "sync0sync.ic"
36 #endif
37 
38 #include "sync0rw.h"
39 #include "buf0buf.h"
40 #include "srv0srv.h"
41 #include "buf0types.h"
42 #include "os0sync.h" /* for HAVE_ATOMIC_BUILTINS */
43 #ifdef UNIV_SYNC_DEBUG
44 # include "srv0start.h" /* srv_is_being_started */
45 #endif /* UNIV_SYNC_DEBUG */
46 #include "ha_prototypes.h"
47 
48 /*
49  REASONS FOR IMPLEMENTING THE SPIN LOCK MUTEX
50  ============================================
51 
52 Semaphore operations in operating systems are slow: Solaris on a 1993 Sparc
53 takes 3 microseconds (us) for a lock-unlock pair and Windows NT on a 1995
54 Pentium takes 20 microseconds for a lock-unlock pair. Therefore, we have to
55 implement our own efficient spin lock mutex. Future operating systems may
56 provide efficient spin locks, but we cannot count on that.
57 
58 Another reason for implementing a spin lock is that on multiprocessor systems
59 it can be more efficient for a processor to run a loop waiting for the
60 semaphore to be released than to switch to a different thread. A thread switch
61 takes 25 us on both platforms mentioned above. See Gray and Reuter's book
62 Transaction processing for background.
63 
64 How long should the spin loop last before suspending the thread? On a
65 uniprocessor, spinning does not help at all, because if the thread owning the
66 mutex is not executing, the mutex cannot be released. Spinning merely wastes
67 resources.
68 
69 On a multiprocessor, we do not know if the thread owning the mutex is
70 executing or not. Thus it would make sense to spin as long as the operation
71 guarded by the mutex would typically last assuming that the thread is
72 executing. If the mutex is not released by that time, we may assume that the
73 thread owning the mutex is not executing and suspend the waiting thread.
74 
75 A typical operation (where no i/o is involved) guarded by a mutex or a read-write
76 lock may last 1 - 20 us on the current Pentium platform. The longest
77 operations are the binary searches on an index node.
78 
79 We conclude that the best choice is to set the spin time at 20 us. Then the
80 system should work well on a multiprocessor. On a uniprocessor we have to
81 make sure that thread switches due to mutex collisions are not frequent,
82 i.e., they do not happen every 100 us or so, because that wastes too many
83 resources. If the thread switches are not frequent, the 20 us wasted in the spin
84 loop is not too much.
85 
86 Empirical studies on the effect of spin time should be done for different
87 platforms.
88 
89 
90  IMPLEMENTATION OF THE MUTEX
91  ===========================
92 
93 For background, see Curt Schimmel's book on Unix implementation on modern
94 architectures. The key points in the implementation are atomicity and
95 serialization of memory accesses. The test-and-set instruction (XCHG in
96 Pentium) must be atomic. As new processors may have weak memory models, also
97 serialization of memory references may be necessary. The successor of Pentium,
98 P6, has at least one mode where the memory model is weak. As far as we know,
99 in Pentium all memory accesses are serialized in the program order and we do
100 not have to worry about the memory model. On other processors there are
101 special machine instructions called a fence, memory barrier, or storage
102 barrier (STBAR in Sparc), which can be used to serialize the memory accesses
103 to happen in program order relative to the fence instruction.
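
The fence idea described above can be sketched with C11 atomics. This is a minimal illustration, not InnoDB code; the names `payload`, `ready`, `publisher`, and `consumer` are ours. The release fence forces the store to `payload` to become visible before the store to `ready`, playing the role of the STBAR-style barrier:

```c
#include <stdatomic.h>

/* Shared state: `payload` is published to other threads only after
   `ready` is set. The fences serialize the memory accesses so they
   become visible in program order relative to the fence. */
static int payload;
static atomic_int ready;

void publisher(int value)
{
    payload = value;                            /* plain store */
    atomic_thread_fence(memory_order_release);  /* payload visible before ready */
    atomic_store_explicit(&ready, 1, memory_order_relaxed);
}

int consumer(void)
{
    if (atomic_load_explicit(&ready, memory_order_relaxed)) {
        atomic_thread_fence(memory_order_acquire); /* ready read before payload */
        return payload;
    }
    return -1; /* not published yet */
}
```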
104 
105 Leslie Lamport has devised a "bakery algorithm" to implement a mutex without
106 the atomic test-and-set, but his algorithm should be modified for weak memory
107 models. We do not use Lamport's algorithm, because we expect it to be slower than
108 the atomic test-and-set.
109 
110 Our mutex implementation works as follows: we first perform the atomic
111 test-and-set instruction on the memory word. If the test returns zero, we
112 know we got the lock first. If the test returns not zero, some other thread
113 was quicker and got the lock: then we spin in a loop reading the memory word,
114 waiting for it to become zero. It is wise to just read the word in the loop, not
115 perform numerous test-and-set instructions, because they generate memory
116 traffic between the cache and the main memory. The read loop can just access
117 the cache, saving bus bandwidth.
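
The spin loop described above — read-only spinning followed by a single atomic test-and-set, with a yield if the lock stays busy — can be sketched as follows. This is a toy model using C11 atomics, not InnoDB's mutex; `toy_mutex` and `TOY_SPIN_ROUNDS` are our own names (the real code uses `SYNC_SPIN_ROUNDS` and adds a wait array, covered below):

```c
#include <stdatomic.h>
#include <sched.h>

#define TOY_SPIN_ROUNDS 30  /* stand-in for SYNC_SPIN_ROUNDS */

typedef struct { atomic_uint lock_word; } toy_mutex;

void toy_mutex_init(toy_mutex *m) { atomic_store(&m->lock_word, 0u); }

void toy_mutex_enter(toy_mutex *m)
{
    for (;;) {
        unsigned i = 0;
        /* Read-only spin: stays in the cache, generates no bus traffic. */
        while (atomic_load(&m->lock_word) != 0u && i < TOY_SPIN_ROUNDS) {
            i++;
        }
        /* Atomic test-and-set: returns the previous value of the word. */
        if (atomic_exchange(&m->lock_word, 1u) == 0u) {
            return;        /* the test returned zero: we got the lock first */
        }
        sched_yield();     /* give the lock holder a chance to run */
    }
}

void toy_mutex_exit(toy_mutex *m) { atomic_store(&m->lock_word, 0u); }
```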
118 
119 If we cannot acquire the mutex lock in the specified time, we reserve a cell
120 in the wait array and set the waiters byte in the mutex to 1. To avoid a race
121 condition, after setting the waiters byte and before suspending the waiting
122 thread, we still have to check that the mutex is reserved, because it may
123 have happened that the thread which was holding the mutex has just released
124 it and did not see the waiters byte set to 1, a case which would lead the
125 other thread to an infinite wait.
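
The race-avoidance rule in the preceding paragraph can be sketched like this. It is a simplification, with hypothetical names (`wait_mutex`, `wait_mutex_prepare_to_wait`); the real code retries `mutex_test_and_set` several times before suspending:

```c
#include <stdatomic.h>

typedef struct {
    atomic_uint lock_word;
    atomic_uint waiters;
} wait_mutex;

/* Returns 1 if the caller must suspend on the event, 0 if it
   acquired the mutex during the re-check. */
int wait_mutex_prepare_to_wait(wait_mutex *m)
{
    atomic_store(&m->waiters, 1u);
    /* Re-check AFTER setting waiters: the holder may have released
       the mutex before it could see waiters == 1. Without this
       re-check, the release might never signal us and we would
       sleep forever. */
    if (atomic_exchange(&m->lock_word, 1u) == 0u) {
        return 0;   /* got the lock; no need to wait */
    }
    return 1;       /* still held: now safe to suspend */
}
```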
126 
127 LEMMA 1: After a thread resets the event of a mutex (or rw_lock), some
128 =======
129 thread will eventually call os_event_set() on that particular event.
130 Thus no infinite wait is possible in this case.
131 
132 Proof: After making the reservation the thread sets the waiters field in the
133 mutex to 1. Then it checks that the mutex is still reserved by some thread,
134 or it reserves the mutex for itself. In any case, some thread (which may be
135 also some earlier thread, not necessarily the one currently holding the mutex)
136 will set the waiters field to 0 in mutex_exit, and then call
137 os_event_set() with the mutex as an argument.
138 Q.E.D.
139 
140 LEMMA 2: If an os_event_set() call is made after some thread has called
141 =======
142 os_event_reset() and before it starts to wait on that event, the call
143 will not be lost to the second thread. This is true even if there is an
144 intervening call to os_event_reset() by another thread.
145 Thus no infinite wait is possible in this case.
146 
147 Proof (non-Windows platforms): os_event_reset() returns a monotonically
148 increasing value of signal_count. This value is increased at every
149 call of os_event_set(). If thread A has called os_event_reset() followed
150 by thread B calling os_event_set() and then some other thread C calling
151 os_event_reset(), the is_set flag of the event will be set to FALSE;
152 but now if thread A calls os_event_wait_low() with the signal_count
153 value returned from the earlier call of os_event_reset(), it will
154 return immediately without waiting.
155 Q.E.D.
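
The signal_count mechanism in this proof can be modelled with a few lines of C. This is a single-threaded toy (`toy_event` is our name, not an InnoDB type) showing only the counting logic; the real os_event also holds a mutex and a condition variable:

```c
/* Toy model of an os_event: a flag plus a monotonically
   increasing count of os_event_set() calls. */
typedef struct {
    int  is_set;
    long signal_count;
} toy_event;

/* Like os_event_reset(): clears the flag and returns the current
   signal count, to be passed to the later wait call. */
long toy_event_reset(toy_event *e)
{
    e->is_set = 0;
    return e->signal_count;
}

/* Like os_event_set(): raises the flag and bumps the count. */
void toy_event_set(toy_event *e)
{
    e->is_set = 1;
    e->signal_count++;
}

/* Like the check in os_event_wait_low(): the caller waits only if
   the event is not set AND no set has happened since its reset. */
int toy_event_would_wait(const toy_event *e, long reset_count)
{
    return !e->is_set && e->signal_count == reset_count;
}
```

Note how an intervening reset by a third thread clears `is_set` but cannot roll back `signal_count`, so the first thread's wait still returns immediately — exactly Lemma 2.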
156 
157 Proof (Windows): If there is a writer thread which is forced to wait for
158 the lock, it may be able to set the state of rw_lock to RW_LOCK_WAIT_EX.
159 The design of rw_lock ensures that there is one and only one thread
160 that is able to change the state to RW_LOCK_WAIT_EX and this thread is
161 guaranteed to acquire the lock after it is released by the current
162 holders and before any other waiter gets the lock.
163 On Windows this thread waits on a separate event, i.e., wait_ex_event.
164 Since only one thread can wait on this event, there is no chance
165 of the event being reset before the writer starts to wait on it.
166 Therefore, this thread is guaranteed to catch the os_event_set()
167 signal issued unconditionally at the release of the lock.
168 Q.E.D. */
169 
170 /* Number of spin waits on mutexes: for performance monitoring */
171 
174 static ib_int64_t mutex_spin_round_count = 0;
177 static ib_int64_t mutex_spin_wait_count = 0;
180 static ib_int64_t mutex_os_wait_count = 0;
183 UNIV_INTERN ib_int64_t mutex_exit_count = 0;
184 
188 
190 UNIV_INTERN ibool sync_initialized = FALSE;
191 
193 typedef struct sync_level_struct sync_level_t;
195 typedef struct sync_thread_struct sync_thread_t;
196 
197 #ifdef UNIV_SYNC_DEBUG
198 
202 UNIV_INTERN sync_thread_t* sync_thread_level_arrays;
203 
205 UNIV_INTERN mutex_t sync_thread_mutex;
206 
207 # ifdef UNIV_PFS_MUTEX
208 UNIV_INTERN mysql_pfs_key_t sync_thread_mutex_key;
209 # endif /* UNIV_PFS_MUTEX */
210 #endif /* UNIV_SYNC_DEBUG */
211 
213 UNIV_INTERN ut_list_base_node_t mutex_list;
214 
217 
218 #ifdef UNIV_PFS_MUTEX
219 UNIV_INTERN mysql_pfs_key_t mutex_list_mutex_key;
220 #endif /* UNIV_PFS_MUTEX */
221 
222 #ifdef UNIV_SYNC_DEBUG
223 
224 UNIV_INTERN ibool sync_order_checks_on = FALSE;
225 #endif /* UNIV_SYNC_DEBUG */
226 
228 static const ulint SYNC_THREAD_N_LEVELS = 10000;
229 
230 typedef struct sync_arr_struct sync_arr_t;
231 
234  ulint in_use;
235  ulint n_elems;
236  ulint max_elems;
237  ulint next_free;
240 };
241 
247 };
248 
251  void* latch;
254  ulint level;
261 };
262 
263 /******************************************************************/
268 UNIV_INTERN
269 void
271 /*==============*/
272  mutex_t* mutex,
273 #ifdef UNIV_DEBUG
274  const char* cmutex_name,
275 # ifdef UNIV_SYNC_DEBUG
276  ulint level,
277 # endif /* UNIV_SYNC_DEBUG */
278 #endif /* UNIV_DEBUG */
279  const char* cfile_name,
280  ulint cline)
281 {
282 #if defined(HAVE_ATOMIC_BUILTINS)
283  mutex_reset_lock_word(mutex);
284 #else
286  mutex->lock_word = 0;
287 #endif
288  mutex->event = os_event_create(NULL);
289  mutex_set_waiters(mutex, 0);
290 #ifdef UNIV_DEBUG
291  mutex->magic_n = MUTEX_MAGIC_N;
292 #endif /* UNIV_DEBUG */
293 #ifdef UNIV_SYNC_DEBUG
294  mutex->line = 0;
295  mutex->file_name = "not yet reserved";
296  mutex->level = level;
297 #endif /* UNIV_SYNC_DEBUG */
298  mutex->cfile_name = cfile_name;
299  mutex->cline = cline;
300  mutex->count_os_wait = 0;
301 #ifdef UNIV_DEBUG
302  mutex->cmutex_name= cmutex_name;
303  mutex->count_using= 0;
304  mutex->mutex_type= 0;
305  mutex->lspent_time= 0;
306  mutex->lmax_spent_time= 0;
307  mutex->count_spin_loop= 0;
308  mutex->count_spin_rounds= 0;
309  mutex->count_os_yield= 0;
310 #endif /* UNIV_DEBUG */
311 
312  /* Check that lock_word is aligned; this is important on Intel */
313  ut_ad(((ulint)(&(mutex->lock_word))) % 4 == 0);
314 
315  /* NOTE! The very first mutexes are not put to the mutex list */
316 
317  if ((mutex == &mutex_list_mutex)
318 #ifdef UNIV_SYNC_DEBUG
319  || (mutex == &sync_thread_mutex)
320 #endif /* UNIV_SYNC_DEBUG */
321  ) {
322 
323  return;
324  }
325 
326  mutex_enter(&mutex_list_mutex);
327 
329  || UT_LIST_GET_FIRST(mutex_list)->magic_n == MUTEX_MAGIC_N);
330 
331  UT_LIST_ADD_FIRST(list, mutex_list, mutex);
332 
333  mutex_exit(&mutex_list_mutex);
334 }
335 
336 /******************************************************************/
341 UNIV_INTERN
342 void
344 /*============*/
345  mutex_t* mutex)
346 {
347  ut_ad(mutex_validate(mutex));
348  ut_a(mutex_get_lock_word(mutex) == 0);
349  ut_a(mutex_get_waiters(mutex) == 0);
350 
351 #ifdef UNIV_MEM_DEBUG
352  if (mutex == &mem_hash_mutex) {
354  ut_ad(UT_LIST_GET_FIRST(mutex_list) == &mem_hash_mutex);
355  UT_LIST_REMOVE(list, mutex_list, mutex);
356  goto func_exit;
357  }
358 #endif /* UNIV_MEM_DEBUG */
359 
360  if (mutex != &mutex_list_mutex
361 #ifdef UNIV_SYNC_DEBUG
362  && mutex != &sync_thread_mutex
363 #endif /* UNIV_SYNC_DEBUG */
364  ) {
365 
366  mutex_enter(&mutex_list_mutex);
367 
368  ut_ad(!UT_LIST_GET_PREV(list, mutex)
369  || UT_LIST_GET_PREV(list, mutex)->magic_n
370  == MUTEX_MAGIC_N);
371  ut_ad(!UT_LIST_GET_NEXT(list, mutex)
372  || UT_LIST_GET_NEXT(list, mutex)->magic_n
373  == MUTEX_MAGIC_N);
374 
375  UT_LIST_REMOVE(list, mutex_list, mutex);
376 
377  mutex_exit(&mutex_list_mutex);
378  }
379 
380  os_event_free(mutex->event);
381 #ifdef UNIV_MEM_DEBUG
382 func_exit:
383 #endif /* UNIV_MEM_DEBUG */
384 #if !defined(HAVE_ATOMIC_BUILTINS)
386 #endif
387  /* If we free the mutex protecting the mutex list (freeing is
388  not necessary), we have to reset the magic number AFTER removing
389  it from the list. */
390 #ifdef UNIV_DEBUG
391  mutex->magic_n = 0;
392 #endif /* UNIV_DEBUG */
393 }
394 
395 /********************************************************************/
400 UNIV_INTERN
401 ulint
403 /*====================*/
404  mutex_t* mutex,
405  const char* /*file_name __attribute__((unused))*/,
408  ulint /*line __attribute__((unused))*/)
410 {
411  ut_ad(mutex_validate(mutex));
412 
413  if (!mutex_test_and_set(mutex)) {
414 
415  ut_d(mutex->thread_id = os_thread_get_curr_id());
416 #ifdef UNIV_SYNC_DEBUG
417  mutex_set_debug_info(mutex, file_name, line);
418 #endif
419 
420  return(0); /* Succeeded! */
421  }
422 
423  return(1);
424 }
425 
426 #ifdef UNIV_DEBUG
427 /******************************************************************/
430 UNIV_INTERN
431 ibool
432 mutex_validate(
433 /*===========*/
434  const mutex_t* mutex)
435 {
436  ut_a(mutex);
437  ut_a(mutex->magic_n == MUTEX_MAGIC_N);
438 
439  return(TRUE);
440 }
441 
442 /******************************************************************/
446 UNIV_INTERN
447 ibool
448 mutex_own(
449 /*======*/
450  const mutex_t* mutex)
451 {
452  ut_ad(mutex_validate(mutex));
453 
454  return(mutex_get_lock_word(mutex) == 1
455  && os_thread_eq(mutex->thread_id, os_thread_get_curr_id()));
456 }
457 #endif /* UNIV_DEBUG */
458 
459 /******************************************************************/
461 UNIV_INTERN
462 void
463 mutex_set_waiters(
464 /*==============*/
465  mutex_t* mutex,
466  ulint n)
467 {
468  volatile ulint* ptr; /* declared volatile to ensure that
469  the value is stored to memory */
470  ut_ad(mutex);
471 
472  ptr = &(mutex->waiters);
473 
474  *ptr = n; /* Here we assume that the write of a single
475  word in memory is atomic */
476 }
477 
478 /******************************************************************/
482 UNIV_INTERN
483 void
484 mutex_spin_wait(
485 /*============*/
486  mutex_t* mutex,
487  const char* file_name,
489  ulint line)
490 {
491  ulint index; /* index of the reserved wait cell */
492  ulint i; /* spin round count */
493 #ifdef UNIV_DEBUG
494  ib_int64_t lstart_time = 0, lfinish_time; /* for timing os_wait */
495  ulint ltime_diff;
496  ulint sec;
497  ulint ms;
498  uint timer_started = 0;
499 #endif /* UNIV_DEBUG */
500  ut_ad(mutex);
501 
502  /* Count the number of calls to mutex_spin_wait. This update is not
503  thread safe, but we do not mind if the count is not exact. It is kept
504  outside the ifdef that follows because we are willing to pay the cost
505  of counting: the data is valuable. */
506  mutex_spin_wait_count++;
507 
508 mutex_loop:
509 
510  i = 0;
511 
512  /* Spin waiting for the lock word to become zero. Note that we do
513  not have to assume that the read access to the lock word is atomic,
514  as the actual locking is always committed with atomic test-and-set.
515  In reality, however, all processors probably have an atomic read of
516  a memory word. */
517 
518 spin_loop:
519  ut_d(mutex->count_spin_loop++);
520 
521  while (mutex_get_lock_word(mutex) != 0 && i < SYNC_SPIN_ROUNDS) {
522  if (srv_spin_wait_delay) {
523  ut_delay(ut_rnd_interval(0, srv_spin_wait_delay));
524  }
525 
526  i++;
527  }
528 
529  if (i == SYNC_SPIN_ROUNDS) {
530 #ifdef UNIV_DEBUG
531  mutex->count_os_yield++;
532 #ifndef UNIV_HOTBACKUP
533  if (timed_mutexes && timer_started == 0) {
534  ut_usectime(&sec, &ms);
535  lstart_time= (ib_int64_t)sec * 1000000 + ms;
536  timer_started = 1;
537  }
538 #endif /* UNIV_HOTBACKUP */
539 #endif /* UNIV_DEBUG */
540  os_thread_yield();
541  }
542 
543 #ifdef UNIV_SRV_PRINT_LATCH_WAITS
544  fprintf(stderr,
545  "Thread %lu spin wait mutex at %p"
546  " cfile %s cline %lu rnds %lu\n",
547  (ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
549  (ulong) mutex->cline, (ulong) i);
550 #endif
551 
552  mutex_spin_round_count += i;
553 
554  ut_d(mutex->count_spin_rounds += i);
555 
556  if (mutex_test_and_set(mutex) == 0) {
557  /* Succeeded! */
558 
559  ut_d(mutex->thread_id = os_thread_get_curr_id());
560 #ifdef UNIV_SYNC_DEBUG
561  mutex_set_debug_info(mutex, file_name, line);
562 #endif
563 
564  goto finish_timing;
565  }
566 
567  /* We may end up with a situation where lock_word is 0 but the OS
568  fast mutex is still reserved. On FreeBSD the OS does not seem to
569  schedule a thread which is constantly calling pthread_mutex_trylock
570  (in mutex_test_and_set implementation). Then we could end up
571  spinning here indefinitely. The following 'i++' stops this infinite
572  spin. */
573 
574  i++;
575 
576  if (i < SYNC_SPIN_ROUNDS) {
577  goto spin_loop;
578  }
579 
580  sync_array_reserve_cell(sync_primary_wait_array, mutex,
581  SYNC_MUTEX, file_name, line, &index);
582 
583  /* The memory order of the array reservation and the change in the
584  waiters field is important: when we suspend a thread, we first
585  reserve the cell and then set waiters field to 1. When threads are
586  released in mutex_exit, the waiters field is first set to zero and
587  then the event is set to the signaled state. */
588 
589  mutex_set_waiters(mutex, 1);
590 
591  /* Try a few more times to reserve the mutex */
592  for (i = 0; i < 4; i++) {
593  if (mutex_test_and_set(mutex) == 0) {
594  /* Succeeded! Free the reserved wait cell */
595 
596  sync_array_free_cell(sync_primary_wait_array, index);
597 
598  ut_d(mutex->thread_id = os_thread_get_curr_id());
599 #ifdef UNIV_SYNC_DEBUG
600  mutex_set_debug_info(mutex, file_name, line);
601 #endif
602 
603 #ifdef UNIV_SRV_PRINT_LATCH_WAITS
604  fprintf(stderr, "Thread %lu spin wait succeeds at 2:"
605  " mutex at %p\n",
607  (void*) mutex);
608 #endif
609 
610  goto finish_timing;
611 
612  /* Note that in this case we leave the waiters field
613  set to 1. We cannot reset it to zero, as we do not
614  know if there are other waiters. */
615  }
616  }
617 
618  /* Now we know that some thread has held the mutex after we made
619  the change to the wait array and the waiters field. Thus there is
620  no risk of an infinite wait on the event. */
621 
622 #ifdef UNIV_SRV_PRINT_LATCH_WAITS
623  fprintf(stderr,
624  "Thread %lu OS wait mutex at %p cfile %s cline %lu rnds %lu\n",
625  (ulong) os_thread_pf(os_thread_get_curr_id()), (void*) mutex,
627  (ulong) mutex->cline, (ulong) i);
628 #endif
629 
630  mutex_os_wait_count++;
631 
632  mutex->count_os_wait++;
633 #ifdef UNIV_DEBUG
634  /* !!!!! Sometimes os_wait can be called without os_thread_yield */
635 #ifndef UNIV_HOTBACKUP
636  if (timed_mutexes == 1 && timer_started == 0) {
637  ut_usectime(&sec, &ms);
638  lstart_time= (ib_int64_t)sec * 1000000 + ms;
639  timer_started = 1;
640  }
641 #endif /* UNIV_HOTBACKUP */
642 #endif /* UNIV_DEBUG */
643 
644  sync_array_wait_event(sync_primary_wait_array, index);
645  goto mutex_loop;
646 
647 finish_timing:
648 #ifdef UNIV_DEBUG
649  if (timed_mutexes == 1 && timer_started==1) {
650  ut_usectime(&sec, &ms);
651  lfinish_time= (ib_int64_t)sec * 1000000 + ms;
652 
653  ltime_diff= (ulint) (lfinish_time - lstart_time);
654  mutex->lspent_time += ltime_diff;
655 
656  if (mutex->lmax_spent_time < ltime_diff) {
657  mutex->lmax_spent_time= ltime_diff;
658  }
659  }
660 #endif /* UNIV_DEBUG */
661  return;
662 }
663 
664 /******************************************************************/
666 UNIV_INTERN
667 void
668 mutex_signal_object(
669 /*================*/
670  mutex_t* mutex)
671 {
672  mutex_set_waiters(mutex, 0);
673 
674  /* The memory order of resetting the waiters field and
675  signaling the object is important. See LEMMA 1 above. */
676  os_event_set(mutex->event);
677  sync_array_object_signalled(sync_primary_wait_array);
678 }
679 
680 #ifdef UNIV_SYNC_DEBUG
681 /******************************************************************/
683 UNIV_INTERN
684 void
685 mutex_set_debug_info(
686 /*=================*/
687  mutex_t* mutex,
688  const char* file_name,
689  ulint line)
690 {
691  ut_ad(mutex);
692  ut_ad(file_name);
693 
694  sync_thread_add_level(mutex, mutex->level);
695 
696  mutex->file_name = file_name;
697  mutex->line = line;
698 }
699 
700 /******************************************************************/
702 UNIV_INTERN
703 void
704 mutex_get_debug_info(
705 /*=================*/
706  mutex_t* mutex,
707  const char** file_name,
708  ulint* line,
709  os_thread_id_t* thread_id)
711 {
712  ut_ad(mutex);
713 
714  *file_name = mutex->file_name;
715  *line = mutex->line;
716  *thread_id = mutex->thread_id;
717 }
718 
719 /******************************************************************/
721 static
722 void
723 mutex_list_print_info(
724 /*==================*/
725  FILE* file)
726 {
727  mutex_t* mutex;
728  const char* file_name;
729  ulint line;
730  os_thread_id_t thread_id;
731  ulint count = 0;
732 
733  fputs("----------\n"
734  "MUTEX INFO\n"
735  "----------\n", file);
736 
737  mutex_enter(&mutex_list_mutex);
738 
739  mutex = UT_LIST_GET_FIRST(mutex_list);
740 
741  while (mutex != NULL) {
742  count++;
743 
744  if (mutex_get_lock_word(mutex) != 0) {
745  mutex_get_debug_info(mutex, &file_name, &line,
746  &thread_id);
747  fprintf(file,
748  "Locked mutex: addr %p thread %ld"
749  " file %s line %ld\n",
750  (void*) mutex, os_thread_pf(thread_id),
751  file_name, line);
752  }
753 
754  mutex = UT_LIST_GET_NEXT(list, mutex);
755  }
756 
757  fprintf(file, "Total number of mutexes %ld\n", count);
758 
759  mutex_exit(&mutex_list_mutex);
760 }
761 
762 /******************************************************************/
765 UNIV_INTERN
766 ulint
767 mutex_n_reserved(void)
768 /*==================*/
769 {
770  mutex_t* mutex;
771  ulint count = 0;
772 
773  mutex_enter(&mutex_list_mutex);
774 
775  for (mutex = UT_LIST_GET_FIRST(mutex_list);
776  mutex != NULL;
777  mutex = UT_LIST_GET_NEXT(list, mutex)) {
778 
779  if (mutex_get_lock_word(mutex) != 0) {
780 
781  count++;
782  }
783  }
784 
785  mutex_exit(&mutex_list_mutex);
786 
787  ut_a(count >= 1);
788 
789  /* Subtract one, because this function itself was holding
790  one mutex (mutex_list_mutex) */
791 
792  return(count - 1);
793 }
794 
795 /******************************************************************/
799 UNIV_INTERN
800 ibool
801 sync_all_freed(void)
802 /*================*/
803 {
804  return(mutex_n_reserved() + rw_lock_n_locked() == 0);
805 }
806 
807 /******************************************************************/
810 static
812 sync_thread_level_arrays_find_slot(void)
813 /*====================================*/
814 
815 {
816  ulint i;
817  os_thread_id_t id;
818 
819  id = os_thread_get_curr_id();
820 
821  for (i = 0; i < OS_THREAD_MAX_N; i++) {
822  sync_thread_t* slot;
823 
824  slot = &sync_thread_level_arrays[i];
825 
826  if (slot->levels && os_thread_eq(slot->id, id)) {
827 
828  return(slot);
829  }
830  }
831 
832  return(NULL);
833 }
834 
835 /******************************************************************/
838 static
840 sync_thread_level_arrays_find_free(void)
841 /*====================================*/
842 
843 {
844  ulint i;
845 
846  for (i = 0; i < OS_THREAD_MAX_N; i++) {
847  sync_thread_t* slot;
848 
849  slot = &sync_thread_level_arrays[i];
850 
851  if (slot->levels == NULL) {
852 
853  return(slot);
854  }
855  }
856 
857  return(NULL);
858 }
859 
860 /******************************************************************/
863 static
864 void
865 sync_print_warning(
866 /*===============*/
867  const sync_level_t* slot)
869 {
870  mutex_t* mutex;
871 
872  mutex = slot->latch;
873 
874  if (mutex->magic_n == MUTEX_MAGIC_N) {
875  fprintf(stderr,
876  "Mutex created at %s %lu\n",
878  (ulong) mutex->cline);
879 
880  if (mutex_get_lock_word(mutex) != 0) {
881  ulint line;
882  const char* file_name;
883  os_thread_id_t thread_id;
884 
885  mutex_get_debug_info(
886  mutex, &file_name, &line, &thread_id);
887 
888  fprintf(stderr,
889  "InnoDB: Locked mutex:"
890  " addr %p thread %ld file %s line %ld\n",
891  (void*) mutex, os_thread_pf(thread_id),
892  file_name, (ulong) line);
893  } else {
894  fputs("Not locked\n", stderr);
895  }
896  } else {
897  rw_lock_t* lock = slot->latch;
898 
899  rw_lock_print(lock);
900  }
901 }
902 
903 /******************************************************************/
907 static
908 ibool
909 sync_thread_levels_g(
910 /*=================*/
911  sync_arr_t* arr,
913  ulint limit,
914  ulint warn)
915 {
916  ulint i;
917 
918  for (i = 0; i < arr->n_elems; i++) {
919  const sync_level_t* slot;
920 
921  slot = &arr->elems[i];
922 
923  if (slot->latch != NULL && slot->level <= limit) {
924  if (warn) {
925  fprintf(stderr,
926  "InnoDB: sync levels should be"
927  " > %lu but a level is %lu\n",
928  (ulong) limit, (ulong) slot->level);
929  sync_print_warning(slot);
930  }
931 
932  return(FALSE);
933 
934  }
935  }
936 
937  return(TRUE);
938 }
939 
940 /******************************************************************/
943 static
944 const sync_level_t*
945 sync_thread_levels_contain(
946 /*=======================*/
947  sync_arr_t* arr,
949  ulint level)
950 {
951  ulint i;
952 
953  for (i = 0; i < arr->n_elems; i++) {
954  const sync_level_t* slot;
955 
956  slot = &arr->elems[i];
957 
958  if (slot->latch != NULL && slot->level == level) {
959  return(slot);
960  }
961  }
962 
963  return(NULL);
964 }
965 
966 /******************************************************************/
970 UNIV_INTERN
971 void*
972 sync_thread_levels_contains(
973 /*========================*/
974  ulint level)
976 {
977  ulint i;
978  sync_arr_t* arr;
979  sync_thread_t* thread_slot;
980 
981  if (!sync_order_checks_on) {
982 
983  return(NULL);
984  }
985 
986  mutex_enter(&sync_thread_mutex);
987 
988  thread_slot = sync_thread_level_arrays_find_slot();
989 
990  if (thread_slot == NULL) {
991 
992  mutex_exit(&sync_thread_mutex);
993 
994  return(NULL);
995  }
996 
997  arr = thread_slot->levels;
998 
999  for (i = 0; i < arr->n_elems; i++) {
1000  sync_level_t* slot;
1001 
1002  slot = &arr->elems[i];
1003 
1004  if (slot->latch != NULL && slot->level == level) {
1005 
1006  mutex_exit(&sync_thread_mutex);
1007  return(slot->latch);
1008  }
1009  }
1010 
1011  mutex_exit(&sync_thread_mutex);
1012 
1013  return(NULL);
1014 }
1015 
1016 /******************************************************************/
1019 UNIV_INTERN
1020 void*
1021 sync_thread_levels_nonempty_gen(
1022 /*============================*/
1023  ibool dict_mutex_allowed)
1027 {
1028  ulint i;
1029  sync_arr_t* arr;
1030  sync_thread_t* thread_slot;
1031 
1032  if (!sync_order_checks_on) {
1033 
1034  return(NULL);
1035  }
1036 
1037  mutex_enter(&sync_thread_mutex);
1038 
1039  thread_slot = sync_thread_level_arrays_find_slot();
1040 
1041  if (thread_slot == NULL) {
1042 
1043  mutex_exit(&sync_thread_mutex);
1044 
1045  return(NULL);
1046  }
1047 
1048  arr = thread_slot->levels;
1049 
1050  for (i = 0; i < arr->n_elems; ++i) {
1051  const sync_level_t* slot;
1052 
1053  slot = &arr->elems[i];
1054 
1055  if (slot->latch != NULL
1056  && (!dict_mutex_allowed
1057  || (slot->level != SYNC_DICT
1058  && slot->level != SYNC_DICT_OPERATION))) {
1059 
1060  mutex_exit(&sync_thread_mutex);
1061  ut_error;
1062 
1063  return(slot->latch);
1064  }
1065  }
1066 
1067  mutex_exit(&sync_thread_mutex);
1068 
1069  return(NULL);
1070 }
1071 
1072 /******************************************************************/
1075 UNIV_INTERN
1076 ibool
1077 sync_thread_levels_empty(void)
1078 /*==========================*/
1079 {
1080  return(sync_thread_levels_empty_gen(FALSE));
1081 }
1082 
1083 /******************************************************************/
1087 UNIV_INTERN
1088 void
1089 sync_thread_add_level(
1090 /*==================*/
1091  void* latch,
1092  ulint level)
1094 {
1095  ulint i;
1096  sync_level_t* slot;
1097  sync_arr_t* array;
1098  sync_thread_t* thread_slot;
1099 
1100  if (!sync_order_checks_on) {
1101 
1102  return;
1103  }
1104 
1105  if ((latch == (void*)&sync_thread_mutex)
1106  || (latch == (void*)&mutex_list_mutex)
1107  || (latch == (void*)&rw_lock_debug_mutex)
1108  || (latch == (void*)&rw_lock_list_mutex)) {
1109 
1110  return;
1111  }
1112 
1113  if (level == SYNC_LEVEL_VARYING) {
1114 
1115  return;
1116  }
1117 
1118  mutex_enter(&sync_thread_mutex);
1119 
1120  thread_slot = sync_thread_level_arrays_find_slot();
1121 
1122  if (thread_slot == NULL) {
1123  ulint sz;
1124 
1125  sz = sizeof(*array)
1126  + (sizeof(*array->elems) * SYNC_THREAD_N_LEVELS);
1127 
1128  /* We have to allocate the level array for a new thread */
1129  array = (sync_arr_t*) calloc(sz, sizeof(char));
1130  ut_a(array != NULL);
1131 
1132  array->next_free = ULINT_UNDEFINED;
1133  array->max_elems = SYNC_THREAD_N_LEVELS;
1134  array->elems = (sync_level_t*) &array[1];
1135 
1136  thread_slot = sync_thread_level_arrays_find_free();
1137  thread_slot->levels = array;
1138  thread_slot->id = os_thread_get_curr_id();
1139  }
1140 
1141  array = thread_slot->levels;
1142 
1143  /* NOTE that there is a problem with _NODE and _LEAF levels: if the
1144  B-tree height changes, then a leaf can change to an internal node
1145  or the other way around. We do not know at present if this can cause
1146  unnecessary assertion failures below. */
1147 
1148  switch (level) {
1149  case SYNC_NO_ORDER_CHECK:
1150  case SYNC_EXTERN_STORAGE:
1151  case SYNC_TREE_NODE_FROM_HASH:
1152  /* Do no order checking */
1153  break;
1154  case SYNC_TRX_SYS_HEADER:
1155  if (srv_is_being_started) {
1156  /* This is violated during trx_sys_create_rsegs()
1157  when creating additional rollback segments when
1158  upgrading in innobase_start_or_create_for_mysql(). */
1159  break;
1160  }
1161  case SYNC_MEM_POOL:
1162  case SYNC_MEM_HASH:
1163  case SYNC_RECV:
1164  case SYNC_WORK_QUEUE:
1165  case SYNC_LOG:
1166  case SYNC_LOG_FLUSH_ORDER:
1167  case SYNC_THR_LOCAL:
1168  case SYNC_ANY_LATCH:
1169  case SYNC_FILE_FORMAT_TAG:
1170  case SYNC_DOUBLEWRITE:
1171  case SYNC_SEARCH_SYS:
1172  case SYNC_SEARCH_SYS_CONF:
1173  case SYNC_TRX_LOCK_HEAP:
1174  case SYNC_KERNEL:
1175  case SYNC_IBUF_BITMAP_MUTEX:
1176  case SYNC_RSEG:
1177  case SYNC_TRX_UNDO:
1178  case SYNC_PURGE_LATCH:
1179  case SYNC_PURGE_QUEUE:
1180  case SYNC_DICT_AUTOINC_MUTEX:
1181  case SYNC_DICT_OPERATION:
1182  case SYNC_DICT_HEADER:
1183  case SYNC_TRX_I_S_RWLOCK:
1184  case SYNC_TRX_I_S_LAST_READ:
1185  if (!sync_thread_levels_g(array, level, TRUE)) {
1186  fprintf(stderr,
1187  "InnoDB: sync_thread_levels_g(array, %lu)"
1188  " does not hold!\n", level);
1189  ut_error;
1190  }
1191  break;
1192  case SYNC_BUF_FLUSH_LIST:
1193  case SYNC_BUF_POOL:
1194  /* We can have multiple mutexes of this type; therefore we
1195  can only check whether the greater-than condition holds. */
1196  if (!sync_thread_levels_g(array, level-1, TRUE)) {
1197  fprintf(stderr,
1198  "InnoDB: sync_thread_levels_g(array, %lu)"
1199  " does not hold!\n", level-1);
1200  ut_error;
1201  }
1202  break;
1203 
1204  case SYNC_BUF_BLOCK:
1205  /* Either the thread must own the buffer pool mutex
1206  (buf_pool->mutex), or it is allowed to latch only ONE
1207  buffer block (block->mutex or buf_pool->zip_mutex). */
1208  if (!sync_thread_levels_g(array, level, FALSE)) {
1209  ut_a(sync_thread_levels_g(array, level - 1, TRUE));
1210  ut_a(sync_thread_levels_contain(array, SYNC_BUF_POOL));
1211  }
1212  break;
1213  case SYNC_REC_LOCK:
1214  if (sync_thread_levels_contain(array, SYNC_KERNEL)) {
1215  ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK - 1,
1216  TRUE));
1217  } else {
1218  ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK, TRUE));
1219  }
1220  break;
1221  case SYNC_IBUF_BITMAP:
1222  /* Either the thread must own the master mutex to all
1223  the bitmap pages, or it is allowed to latch only ONE
1224  bitmap page. */
1225  if (sync_thread_levels_contain(array,
1226  SYNC_IBUF_BITMAP_MUTEX)) {
1227  ut_a(sync_thread_levels_g(array, SYNC_IBUF_BITMAP - 1,
1228  TRUE));
1229  } else {
1230  /* This is violated during trx_sys_create_rsegs()
1231  when creating additional rollback segments when
1232  upgrading in innobase_start_or_create_for_mysql(). */
1233  ut_a(srv_is_being_started
1234  || sync_thread_levels_g(array, SYNC_IBUF_BITMAP,
1235  TRUE));
1236  }
1237  break;
1238  case SYNC_FSP_PAGE:
1239  ut_a(sync_thread_levels_contain(array, SYNC_FSP));
1240  break;
1241  case SYNC_FSP:
1242  ut_a(sync_thread_levels_contain(array, SYNC_FSP)
1243  || sync_thread_levels_g(array, SYNC_FSP, TRUE));
1244  break;
1245  case SYNC_TRX_UNDO_PAGE:
1246  /* Purge is allowed to read in as many UNDO pages as it likes,
1247  there was a bogus rule here earlier that forced the caller to
1248  acquire the purge_sys_t::mutex. The purge mutex did not really
1249  protect anything because it was only ever acquired by the
1250  single purge thread. The purge thread can read the UNDO pages
1251  without any covering mutex. */
1252 
1253  ut_a(sync_thread_levels_contain(array, SYNC_TRX_UNDO)
1254  || sync_thread_levels_contain(array, SYNC_RSEG)
1255  || sync_thread_levels_g(array, level - 1, TRUE));
1256  break;
1257  case SYNC_RSEG_HEADER:
1258  ut_a(sync_thread_levels_contain(array, SYNC_RSEG));
1259  break;
1260  case SYNC_RSEG_HEADER_NEW:
1261  ut_a(sync_thread_levels_contain(array, SYNC_KERNEL)
1262  && sync_thread_levels_contain(array, SYNC_FSP_PAGE));
1263  break;
1264  case SYNC_TREE_NODE:
1265  ut_a(sync_thread_levels_contain(array, SYNC_INDEX_TREE)
1266  || sync_thread_levels_contain(array, SYNC_DICT_OPERATION)
1267  || sync_thread_levels_g(array, SYNC_TREE_NODE - 1, TRUE));
1268  break;
1269  case SYNC_TREE_NODE_NEW:
1270  ut_a(sync_thread_levels_contain(array, SYNC_FSP_PAGE)
1271  || sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
1272  break;
1273  case SYNC_INDEX_TREE:
1274  if (sync_thread_levels_contain(array, SYNC_IBUF_MUTEX)
1275  && sync_thread_levels_contain(array, SYNC_FSP)) {
1276  ut_a(sync_thread_levels_g(array, SYNC_FSP_PAGE - 1,
1277  TRUE));
1278  } else {
1279  ut_a(sync_thread_levels_g(array, SYNC_TREE_NODE - 1,
1280  TRUE));
1281  }
1282  break;
1283  case SYNC_IBUF_MUTEX:
1284  ut_a(sync_thread_levels_g(array, SYNC_FSP_PAGE - 1, TRUE));
1285  break;
1286  case SYNC_IBUF_PESS_INSERT_MUTEX:
1287  ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
1288  ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
1289  break;
1290  case SYNC_IBUF_HEADER:
1291  ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
1292  ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
1293  ut_a(!sync_thread_levels_contain(array,
1294  SYNC_IBUF_PESS_INSERT_MUTEX));
1295  break;
1296  case SYNC_DICT:
1297 #ifdef UNIV_DEBUG
1298  ut_a(buf_debug_prints
1299  || sync_thread_levels_g(array, SYNC_DICT, TRUE));
1300 #else /* UNIV_DEBUG */
1301  ut_a(sync_thread_levels_g(array, SYNC_DICT, TRUE));
1302 #endif /* UNIV_DEBUG */
1303  break;
1304  default:
1305  ut_error;
1306  }
1307 
1308  if (array->next_free == ULINT_UNDEFINED) {
1309  ut_a(array->n_elems < array->max_elems);
1310 
1311  i = array->n_elems++;
1312  } else {
1313  i = array->next_free;
1314  array->next_free = array->elems[i].level;
1315  }
1316 
1317  ut_a(i < array->n_elems);
1318  ut_a(i != ULINT_UNDEFINED);
1319 
1320  ++array->in_use;
1321 
1322  slot = &array->elems[i];
1323 
1324  ut_a(slot->latch == NULL);
1325 
1326  slot->latch = latch;
1327  slot->level = level;
1328 
1329  mutex_exit(&sync_thread_mutex);
1330 }
1331 
1332 /******************************************************************/
1333 /** Removes a latch from the thread level array if it is found there.
1334 @return TRUE if found in the array; it is no error if the latch is
1335 not found, as we presently are not able to determine the level for
1336 every latch reaching this function. */
1337 UNIV_INTERN
1338 ibool
1339 sync_thread_reset_level(
1340 /*====================*/
1341  void* latch)
1342 {
1343  sync_arr_t* array;
1344  sync_thread_t* thread_slot;
1345  ulint i;
1346 
1347  if (!sync_order_checks_on) {
1348 
1349  return(FALSE);
1350  }
1351 
1352  if ((latch == (void*)&sync_thread_mutex)
1353  || (latch == (void*)&mutex_list_mutex)
1354  || (latch == (void*)&rw_lock_debug_mutex)
1355  || (latch == (void*)&rw_lock_list_mutex)) {
1356 
1357  return(FALSE);
1358  }
1359 
1360  mutex_enter(&sync_thread_mutex);
1361 
1362  thread_slot = sync_thread_level_arrays_find_slot();
1363 
1364  if (thread_slot == NULL) {
1365 
1366  ut_error;
1367 
1368  mutex_exit(&sync_thread_mutex);
1369  return(FALSE);
1370  }
1371 
1372  array = thread_slot->levels;
1373 
1374  for (i = 0; i < array->n_elems; i++) {
1375  sync_level_t* slot;
1376 
1377  slot = &array->elems[i];
1378 
1379  if (slot->latch != latch) {
1380  continue;
1381  }
1382 
1383  slot->latch = NULL;
1384 
1385  /* Update the free slot list. See comment in sync_level_t
1386  for the level field. */
1387  slot->level = array->next_free;
1388  array->next_free = i;
1389 
1390  ut_a(array->in_use >= 1);
1391  --array->in_use;
1392 
1393  /* If all cells are idle then reset the free
1394  list. The assumption is that this will save
1395  time when we need to scan up to n_elems. */
1396 
1397  if (array->in_use == 0) {
1398  array->n_elems = 0;
1399  array->next_free = ULINT_UNDEFINED;
1400  }
1401 
1402  mutex_exit(&sync_thread_mutex);
1403 
1404  return(TRUE);
1405  }
1406 
1407  if (((mutex_t*) latch)->magic_n != MUTEX_MAGIC_N) {
1408  rw_lock_t* rw_lock;
1409 
1410  rw_lock = (rw_lock_t*) latch;
1411 
1412  if (rw_lock->level == SYNC_LEVEL_VARYING) {
1413  mutex_exit(&sync_thread_mutex);
1414 
1415  return(TRUE);
1416  }
1417  }
1418 
1419  ut_error;
1420 
1421  mutex_exit(&sync_thread_mutex);
1422 
1423  return(FALSE);
1424 }
1425 #endif /* UNIV_SYNC_DEBUG */
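The debug code above enforces InnoDB's latching-order rule: a thread may normally acquire only latches whose level is below every level it already holds (this is what `sync_thread_levels_g` checks before a new slot is recorded). The following standalone sketch illustrates that rule; the names (`held_latches_t`, `try_add_level`) are illustrative stand-ins, not InnoDB APIs, and the strict (`nonstrict == FALSE`) variant of the check is assumed.

```c
#include <assert.h>
#include <stddef.h>

#define MAX_HELD 16

/* Hypothetical miniature of a thread's latch-level array. */
typedef struct {
	unsigned long	levels[MAX_HELD];
	size_t		n_held;
} held_latches_t;

/* Returns 1 if every held level is strictly greater than limit;
mirrors sync_thread_levels_g(array, limit, FALSE). */
static int
levels_g(const held_latches_t* h, unsigned long limit)
{
	size_t	i;

	for (i = 0; i < h->n_held; i++) {
		if (h->levels[i] <= limit) {
			return(0);
		}
	}

	return(1);
}

/* Acquire a latch: allowed only when its level is below all held
levels; otherwise report a latching order violation. */
static int
try_add_level(held_latches_t* h, unsigned long level)
{
	if (!levels_g(h, level)) {
		return(0);	/* would deadlock-order violate */
	}

	h->levels[h->n_held++] = level;

	return(1);
}
```

Acquiring in descending level order succeeds; attempting a level at or above one already held is refused, which is exactly how a global acquisition order rules out cyclic waits.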
1426 
1427 /******************************************************************/
1428 /** Initializes the synchronization data structures. */
1429 UNIV_INTERN
1430 void
1431 sync_init(void)
1432 /*===========*/
1433 {
1434  ut_a(sync_initialized == FALSE);
1435 
1436  sync_initialized = TRUE;
1437 
1438  /* Create the primary system wait array which is protected by an OS
1439  mutex */
1440 
1441  sync_primary_wait_array = sync_array_create(OS_THREAD_MAX_N,
1442  SYNC_ARRAY_OS_MUTEX);
1443 #ifdef UNIV_SYNC_DEBUG
1444  /* Create the thread latch level array where the latch levels
1445  are stored for each OS thread */
1446 
1447  sync_thread_level_arrays = calloc(
1448  sizeof(sync_thread_t), OS_THREAD_MAX_N);
1449  ut_a(sync_thread_level_arrays != NULL);
1450 
1451 #endif /* UNIV_SYNC_DEBUG */
1452  /* Init the mutex list and create the mutex to protect it. */
1453 
1454  UT_LIST_INIT(mutex_list);
1455  mutex_create(mutex_list_mutex_key, &mutex_list_mutex,
1456  SYNC_NO_ORDER_CHECK);
1457 #ifdef UNIV_SYNC_DEBUG
1458  mutex_create(sync_thread_mutex_key, &sync_thread_mutex,
1459  SYNC_NO_ORDER_CHECK);
1460 #endif /* UNIV_SYNC_DEBUG */
1461 
1462  /* Init the rw-lock list and create the mutex to protect it. */
1463 
1464  UT_LIST_INIT(rw_lock_list);
1465  mutex_create(rw_lock_list_mutex_key, &rw_lock_list_mutex,
1466  SYNC_NO_ORDER_CHECK);
1467 
1468 #ifdef UNIV_SYNC_DEBUG
1469  mutex_create(rw_lock_debug_mutex_key, &rw_lock_debug_mutex,
1470  SYNC_NO_ORDER_CHECK);
1471 
1472  rw_lock_debug_event = os_event_create(NULL);
1473  rw_lock_debug_waiters = FALSE;
1474 #endif /* UNIV_SYNC_DEBUG */
1475 }
1476 
1477 #ifdef UNIV_SYNC_DEBUG
1478 /******************************************************************/
1479 /** Frees the thread level arrays. */
1480 static
1481 void
1482 sync_thread_level_arrays_free(void)
1483 /*===============================*/
1484 
1485 {
1486  ulint i;
1487 
1488  for (i = 0; i < OS_THREAD_MAX_N; i++) {
1489  sync_thread_t* slot;
1490 
1491  slot = &sync_thread_level_arrays[i];
1492 
1493  /* If this slot was allocated then free the slot memory too. */
1494  if (slot->levels != NULL) {
1495  free(slot->levels);
1496  slot->levels = NULL;
1497  }
1498  }
1499 
1500  free(sync_thread_level_arrays);
1501  sync_thread_level_arrays = NULL;
1502 }
1503 #endif /* UNIV_SYNC_DEBUG */
1504 
1505 /******************************************************************/
1506 /** Frees the resources in InnoDB's own synchronization data structures.
1507 Use os_sync_free() after calling this. */
1508 UNIV_INTERN
1509 void
1510 sync_close(void)
1511 /*===========*/
1512 {
1513  mutex_t* mutex;
1514 
1515  sync_array_free(sync_primary_wait_array);
1516 
1517  for (mutex = UT_LIST_GET_FIRST(mutex_list);
1518  mutex != NULL;
1519  /* No op */) {
1520 
1521 #ifdef UNIV_MEM_DEBUG
1522  if (mutex == &mem_hash_mutex) {
1523  mutex = UT_LIST_GET_NEXT(list, mutex);
1524  continue;
1525  }
1526 #endif /* UNIV_MEM_DEBUG */
1527 
1528  mutex_free(mutex);
1529 
1530  mutex = UT_LIST_GET_FIRST(mutex_list);
1531  }
1532 
1533  mutex_free(&mutex_list_mutex);
1534 #ifdef UNIV_SYNC_DEBUG
1535  mutex_free(&sync_thread_mutex);
1536 
1537  /* Switch latching order checks off in sync0sync.c */
1538  sync_order_checks_on = FALSE;
1539 
1540  sync_thread_level_arrays_free();
1541 #endif /* UNIV_SYNC_DEBUG */
1542 
1543  sync_initialized = FALSE;
1544 }
1545 
1546 /*******************************************************************/
1547 /** Prints wait info of the sync system. */
1548 UNIV_INTERN
1549 void
1550 sync_print_wait_info(
1551 /*=================*/
1552  FILE* file)
1553 {
1554 #ifdef UNIV_SYNC_DEBUG
1555  fprintf(file, "Mutex exits %llu, rws exits %llu, rwx exits %llu\n",
1556  mutex_exit_count, rw_s_exit_count, rw_x_exit_count);
1557 #endif
1558 
1559  fprintf(file,
1560  "Mutex spin waits %"PRId64", rounds %"PRId64", "
1561  "OS waits %"PRId64"\n"
1562  "RW-shared spins %"PRId64", rounds %"PRId64", OS waits %"PRId64";"
1563  " RW-excl spins %"PRId64", rounds %"PRId64", OS waits %"PRId64"\n",
1564  mutex_spin_wait_count,
1565  mutex_spin_round_count,
1566  mutex_os_wait_count,
1567  rw_s_spin_wait_count,
1568  rw_s_spin_round_count,
1569  rw_s_os_wait_count,
1570  rw_x_spin_wait_count,
1571  rw_x_spin_round_count,
1572  rw_x_os_wait_count);
1573 
1574  fprintf(file,
1575  "Spin rounds per wait: %.2f mutex, %.2f RW-shared, "
1576  "%.2f RW-excl\n",
1577  (double) mutex_spin_round_count /
1578  (mutex_spin_wait_count ? mutex_spin_wait_count : 1),
1579  (double) rw_s_spin_round_count /
1580  (rw_s_spin_wait_count ? rw_s_spin_wait_count : 1),
1581  (double) rw_x_spin_round_count /
1582  (rw_x_spin_wait_count ? rw_x_spin_wait_count : 1));
1583 }
1584 
1585 /*******************************************************************/
1586 /** Prints info of the sync system. */
1587 UNIV_INTERN
1588 void
1589 sync_print(
1590 /*=======*/
1591  FILE* file)
1592 {
1593 #ifdef UNIV_SYNC_DEBUG
1594  mutex_list_print_info(file);
1595 
1596  rw_lock_list_print_info(file);
1597 #endif /* UNIV_SYNC_DEBUG */
1598 
1599  sync_array_print_info(file, sync_primary_wait_array);
1600 
1601  sync_print_wait_info(file);
1602 }
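A detail worth noting from `sync_thread_add_level`/`sync_thread_reset_level` above: freed slots are chained into an intrusive free list by reusing the slot's `level` field as the next-free index, with `array->next_free` as the list head, and the whole array is reset once `in_use` drops to zero. The sketch below shows that pattern in isolation; the type and function names are illustrative, and `SLOT_FREE_END` stands in for `ULINT_UNDEFINED`.

```c
#include <assert.h>
#include <stddef.h>

#define SLOT_FREE_END	((size_t) -1)	/* stand-in for ULINT_UNDEFINED */
#define N_SLOTS		8

typedef struct {
	void*	latch;	/* NULL while the slot is free */
	size_t	level;	/* holds the next-free index while free */
} slot_t;

typedef struct {
	slot_t	elems[N_SLOTS];
	size_t	n_elems;	/* high-water mark of used slots */
	size_t	next_free;	/* head of the embedded free list */
	size_t	in_use;
} slot_arr_t;

/* Take a slot from the free list, or extend the high-water mark. */
static size_t
slot_alloc(slot_arr_t* a, void* latch)
{
	size_t	i;

	if (a->next_free == SLOT_FREE_END) {
		i = a->n_elems++;
	} else {
		i = a->next_free;
		a->next_free = a->elems[i].level;
	}

	a->elems[i].latch = latch;
	a->in_use++;

	return(i);
}

/* Release a slot, linking it into the free list; reset wholesale
when everything is idle, so later scans stop early. */
static void
slot_free(slot_arr_t* a, size_t i)
{
	a->elems[i].latch = NULL;
	a->elems[i].level = a->next_free;
	a->next_free = i;
	a->in_use--;

	if (a->in_use == 0) {
		a->n_elems = 0;
		a->next_free = SLOT_FREE_END;
	}
}
```

The payoff of the embedded list is that allocation and release are O(1) without any extra memory, while `n_elems` keeps scans (like the loop in `sync_thread_reset_level`) bounded by the high-water mark rather than the array capacity.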