MySQL 5.6.14 Source Code Document
sync0sync.cc
/*****************************************************************************

Copyright (c) 1995, 2011, Oracle and/or its affiliates. All Rights Reserved.
Copyright (c) 2008, Google Inc.

Portions of this file contain modifications contributed and copyrighted by
Google, Inc. Those modifications are gratefully acknowledged and are described
briefly in the InnoDB documentation. The contributions by Google are
incorporated with their permission, and subject to the conditions contained in
the file COPYING.Google.

This program is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free Software
Foundation; version 2 of the License.

This program is distributed in the hope that it will be useful, but WITHOUT
ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS
FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with
this program; if not, write to the Free Software Foundation, Inc.,
51 Franklin Street, Suite 500, Boston, MA 02110-1335 USA

*****************************************************************************/
/**************************************************//**
@file sync/sync0sync.cc
Mutex, the basic synchronization primitive

Created 9/5/1995 Heikki Tuuri
*******************************************************/
#include "sync0sync.h"
#ifdef UNIV_NONINL
#include "sync0sync.ic"
#endif

#include "sync0rw.h"
#include "buf0buf.h"
#include "srv0srv.h"
#include "buf0types.h"
#include "os0sync.h" /* for HAVE_ATOMIC_BUILTINS */
#ifdef UNIV_SYNC_DEBUG
# include "srv0start.h" /* srv_is_being_started */
#endif /* UNIV_SYNC_DEBUG */
#include "ha_prototypes.h"

/*
	REASONS FOR IMPLEMENTING THE SPIN LOCK MUTEX
	============================================

Semaphore operations in operating systems are slow: Solaris on a 1993 Sparc
takes 3 microseconds (us) for a lock-unlock pair and Windows NT on a 1995
Pentium takes 20 microseconds for a lock-unlock pair. Therefore, we have to
implement our own efficient spin lock mutex. Future operating systems may
provide efficient spin locks, but we cannot count on that.

Another reason for implementing a spin lock is that on multiprocessor systems
it can be more efficient for a processor to run a loop waiting for the
semaphore to be released than to switch to a different thread. A thread switch
takes 25 us on both platforms mentioned above. See Gray and Reuter's book
Transaction Processing for background.

How long should the spin loop last before suspending the thread? On a
uniprocessor, spinning does not help at all, because if the thread owning the
mutex is not executing, the mutex cannot be released. Spinning actually
wastes resources.

On a multiprocessor, we do not know if the thread owning the mutex is
executing or not. Thus it would make sense to spin as long as the operation
guarded by the mutex would typically last, assuming that the thread is
executing. If the mutex is not released by that time, we may assume that the
thread owning the mutex is not executing and suspend the waiting thread.

A typical operation (where no i/o is involved) guarded by a mutex or a
read-write lock may last 1 - 20 us on the current Pentium platform. The
longest operations are the binary searches on an index node.

We conclude that the best choice is to set the spin time at 20 us. Then the
system should work well on a multiprocessor. On a uniprocessor we have to
make sure that thread switches due to mutex collisions are not frequent,
i.e., that they do not happen every 100 us or so, because that wastes too
many resources. If the thread switches are not frequent, the 20 us wasted in
the spin loop is not too much.

Empirical studies on the effect of spin time should be done for different
platforms.


	IMPLEMENTATION OF THE MUTEX
	===========================

For background, see Curt Schimmel's book on Unix implementation on modern
architectures. The key points in the implementation are atomicity and
serialization of memory accesses. The test-and-set instruction (XCHG in
Pentium) must be atomic. As new processors may have weak memory models,
serialization of memory references may also be necessary. The successor of
Pentium, P6, has at least one mode where the memory model is weak. As far as
we know, in Pentium all memory accesses are serialized in program order and
we do not have to worry about the memory model. On other processors there are
special machine instructions, called a fence, memory barrier, or storage
barrier (STBAR in Sparc), which can be used to force memory accesses to
happen in program order relative to the fence instruction.

Leslie Lamport has devised a "bakery algorithm" to implement a mutex without
the atomic test-and-set, but his algorithm must be modified for weak memory
models. We do not use Lamport's algorithm, because we expect it to be slower
than the atomic test-and-set.

Our mutex implementation works as follows: we first perform the atomic
test-and-set instruction on the memory word. If the test returns zero, we
know we got the lock first. If the test returns not zero, some other thread
was quicker and got the lock: then we spin in a loop reading the memory word,
waiting for it to become zero. It is wise to just read the word in the loop,
not perform numerous test-and-set instructions, because they generate memory
traffic between the cache and the main memory. The read loop can just access
the cache, saving bus bandwidth.

If we cannot acquire the mutex lock in the specified time, we reserve a cell
in the wait array and set the waiters byte in the mutex to 1. To avoid a race
condition, after setting the waiters byte and before suspending the waiting
thread, we still have to check that the mutex is reserved, because it may
have happened that the thread which was holding the mutex has just released
it and did not see the waiters byte set to 1, a case which would lead the
other thread to an infinite wait.

LEMMA 1: After a thread resets the event of a mutex (or rw_lock), some
=======
thread will eventually call os_event_set() on that particular event.
Thus no infinite wait is possible in this case.

Proof: After making the reservation the thread sets the waiters field in the
mutex to 1. Then it checks that the mutex is still reserved by some thread,
or it reserves the mutex for itself. In any case, some thread (which may be
also some earlier thread, not necessarily the one currently holding the mutex)
will set the waiters field to 0 in mutex_exit, and then call
os_event_set() with the mutex as an argument.
Q.E.D.

LEMMA 2: If an os_event_set() call is made after some thread has called
=======
os_event_reset() and before it starts to wait on that event, the call
will not be lost to the second thread. This is true even if there is an
intervening call to os_event_reset() by another thread.
Thus no infinite wait is possible in this case.

Proof (non-Windows platforms): os_event_reset() returns a monotonically
increasing value of signal_count. This value is increased at every
call of os_event_set(). If thread A has called os_event_reset() followed
by thread B calling os_event_set() and then some other thread C calling
os_event_reset(), the is_set flag of the event will be set to FALSE;
but now if thread A calls os_event_wait_low() with the signal_count
value returned from the earlier call of os_event_reset(), it will
return immediately without waiting.
Q.E.D.

Proof (Windows): If there is a writer thread which is forced to wait for
the lock, it may be able to set the state of rw_lock to RW_LOCK_WAIT_EX.
The design of rw_lock ensures that there is one and only one thread
that is able to change the state to RW_LOCK_WAIT_EX and this thread is
guaranteed to acquire the lock after it is released by the current
holders and before any other waiter gets the lock.
On Windows this thread waits on a separate event, i.e., wait_ex_event.
Since only one thread can wait on this event, there is no chance
of this event getting reset before the writer starts waiting on it.
Therefore, this thread is guaranteed to catch the os_event_set()
signalled unconditionally at the release of the lock.
Q.E.D. */
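
/* Illustrative sketch (not part of the original source): the protocol
described above, reduced to its skeleton. It assumes a GCC-style compiler
that provides the __sync atomic builtins (the same family of builtins
InnoDB uses when HAVE_ATOMIC_BUILTINS is defined); the wait-array and
event machinery is approximated here by a bare sched_yield(). The
demo_* names are hypothetical. */

#include <sched.h>

#define DEMO_SPIN_ROUNDS	30

struct demo_mutex_t {
	volatile long	lock_word;	/* 0 == free, 1 == locked */
};

static void
demo_mutex_enter(demo_mutex_t* m)
{
	for (;;) {
		int	i;

		/* Spin reading the lock word: plain reads stay in the
		cache and generate no bus traffic. */
		for (i = 0; m->lock_word != 0 && i < DEMO_SPIN_ROUNDS; i++) {
		}

		/* Commit the lock with an atomic test-and-set. */
		if (__sync_lock_test_and_set(&m->lock_word, 1) == 0) {
			return;	/* Succeeded! */
		}

		/* In InnoDB we would now reserve a wait array cell, set
		the waiters byte, and re-check the lock word (see LEMMA 1);
		here we simply give up the processor and retry. */
		sched_yield();
	}
}

static void
demo_mutex_exit(demo_mutex_t* m)
{
	/* Atomic release with a barrier; InnoDB would additionally
	signal the event if the waiters byte is set. */
	__sync_lock_release(&m->lock_word);
}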

/* Number of spin waits on mutexes: for performance monitoring */

/** The number of iterations in the mutex_spin_wait() spin loop.
Intended for performance monitoring. */
static ib_counter_t<ib_int64_t, IB_N_SLOTS>	mutex_spin_round_count;

/** The number of mutex_spin_wait() calls. Intended for
performance monitoring. */
static ib_counter_t<ib_int64_t, IB_N_SLOTS>	mutex_spin_wait_count;

/** The number of OS waits in mutex_spin_wait(). Intended for
performance monitoring. */
static ib_counter_t<ib_int64_t, IB_N_SLOTS>	mutex_os_wait_count;

/** The number of mutex_exit() calls. Intended for performance
monitoring. */
UNIV_INTERN ib_int64_t	mutex_exit_count;

/** This variable is set to TRUE when sync_init() is called */
UNIV_INTERN ibool	sync_initialized	= FALSE;

#ifdef UNIV_SYNC_DEBUG

/** An acquired mutex or rw-lock and its level in the latching order */
struct sync_level_t;
/** Mutexes or rw-locks held by a thread */
struct sync_thread_t;

/** The latch levels currently owned by threads are stored in this data
structure; the size of this array is OS_THREAD_MAX_N */
UNIV_INTERN sync_thread_t*	sync_thread_level_arrays;

/** Mutex protecting sync_thread_level_arrays */
UNIV_INTERN ib_mutex_t		sync_thread_mutex;

# ifdef UNIV_PFS_MUTEX
UNIV_INTERN mysql_pfs_key_t	sync_thread_mutex_key;
# endif /* UNIV_PFS_MUTEX */
#endif /* UNIV_SYNC_DEBUG */

/** Global list of database mutexes (not OS mutexes) created. */
UNIV_INTERN ut_list_base_node_t	mutex_list;

/** Mutex protecting the mutex_list variable */
UNIV_INTERN ib_mutex_t	mutex_list_mutex;

#ifdef UNIV_PFS_MUTEX
UNIV_INTERN mysql_pfs_key_t	mutex_list_mutex_key;
#endif /* UNIV_PFS_MUTEX */

#ifdef UNIV_SYNC_DEBUG

/** This variable is set to TRUE when latching order checking is on */
UNIV_INTERN ibool	sync_order_checks_on	= FALSE;

/** Number of slots reserved for each OS thread in the sync level array */
static const ulint SYNC_THREAD_N_LEVELS = 10000;

/** Array for tracking sync levels per thread. */
struct sync_arr_t {
	ulint		in_use;		/*!< Number of active cells */
	ulint		n_elems;	/*!< Number of elements in the array */
	ulint		max_elems;	/*!< Maximum elements */
	ulint		next_free;	/*!< ULINT_UNDEFINED or index of next
					free slot */
	sync_level_t*	elems;		/*!< Array elements */
};

/** Mutexes or rw-locks held by a thread */
struct sync_thread_t{
	os_thread_id_t	id;		/*!< OS thread id */
	sync_arr_t*	levels;		/*!< level array for this thread; if
					this is NULL this slot is unused */
};

/** An acquired mutex or rw-lock and its level in the latching order */
struct sync_level_t{
	void*		latch;		/*!< pointer to a mutex or an
					rw-lock; NULL means that the slot
					is empty */
	ulint		level;		/*!< level of the latch in the
					latching order. This field is
					overloaded to serve as a node in
					a linked list of free nodes too.
					When latch == NULL then this will
					contain the ordinal value of the
					next free element */
};
#endif /* UNIV_SYNC_DEBUG */

/******************************************************************//**
Creates, or rather, initializes, a mutex object in a specified memory
location (which must be appropriately aligned). The mutex is initialized
in the reset state. Explicit freeing of the mutex with mutex_free() is
necessary only if the memory block containing it is freed. */
UNIV_INTERN
void
mutex_create_func(
/*==============*/
	ib_mutex_t*	mutex,		/*!< in: pointer to memory */
#ifdef UNIV_DEBUG
	const char*	cmutex_name,	/*!< in: mutex name */
# ifdef UNIV_SYNC_DEBUG
	ulint		level,		/*!< in: level */
# endif /* UNIV_SYNC_DEBUG */
#endif /* UNIV_DEBUG */
	const char*	cfile_name,	/*!< in: file name where created */
	ulint		cline)		/*!< in: file line where created */
{
#if defined(HAVE_ATOMIC_BUILTINS)
	mutex_reset_lock_word(mutex);
#else
	os_fast_mutex_init(PFS_NOT_INSTRUMENTED, &mutex->os_fast_mutex);
	mutex->lock_word = 0;
#endif
	mutex->event = os_event_create();
	mutex_set_waiters(mutex, 0);
#ifdef UNIV_DEBUG
	mutex->magic_n = MUTEX_MAGIC_N;
#endif /* UNIV_DEBUG */
#ifdef UNIV_SYNC_DEBUG
	mutex->line = 0;
	mutex->file_name = "not yet reserved";
	mutex->level = level;
#endif /* UNIV_SYNC_DEBUG */
	mutex->cfile_name = cfile_name;
	mutex->cline = cline;
	mutex->count_os_wait = 0;

	/* Check that lock_word is aligned; this is important on Intel */
	ut_ad(((ulint)(&(mutex->lock_word))) % 4 == 0);

	/* NOTE! The very first mutexes are not put to the mutex list */

	if ((mutex == &mutex_list_mutex)
#ifdef UNIV_SYNC_DEBUG
	    || (mutex == &sync_thread_mutex)
#endif /* UNIV_SYNC_DEBUG */
	    ) {

		return;
	}

	mutex_enter(&mutex_list_mutex);

	ut_ad(UT_LIST_GET_LEN(mutex_list) == 0
	      || UT_LIST_GET_FIRST(mutex_list)->magic_n == MUTEX_MAGIC_N);

	UT_LIST_ADD_FIRST(list, mutex_list, mutex);

	mutex_exit(&mutex_list_mutex);
}
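
/* Usage sketch (illustrative, not part of the original source): callers
go through the mutex_create() macro, which supplies the creation file and
line (and, in UNIV_SYNC_DEBUG builds, the latching-order level) and
expands to mutex_create_func() above. The call shapes below follow the
calls made later in this file; demo_mutex itself is hypothetical. */
#if 0
	static ib_mutex_t	demo_mutex;

	mutex_create(mutex_list_mutex_key, &demo_mutex,
		     SYNC_NO_ORDER_CHECK);

	mutex_enter(&demo_mutex);
	/* ... critical section ... */
	mutex_exit(&demo_mutex);

	mutex_free(&demo_mutex);
#endif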

/******************************************************************//**
NOTE! Use the corresponding macro mutex_free(), not directly this function!
Calling this function is obligatory only if the memory buffer containing
the mutex is freed. Removes a mutex object from the mutex list. The mutex
is checked to be in the reset state. */
UNIV_INTERN
void
mutex_free_func(
/*============*/
	ib_mutex_t*	mutex)	/*!< in: mutex */
{
	ut_ad(mutex_validate(mutex));
	ut_a(mutex_get_lock_word(mutex) == 0);
	ut_a(mutex_get_waiters(mutex) == 0);

#ifdef UNIV_MEM_DEBUG
	if (mutex == &mem_hash_mutex) {
		ut_ad(UT_LIST_GET_LEN(mutex_list) == 1);
		ut_ad(UT_LIST_GET_FIRST(mutex_list) == &mem_hash_mutex);
		UT_LIST_REMOVE(list, mutex_list, mutex);
		goto func_exit;
	}
#endif /* UNIV_MEM_DEBUG */

	if (mutex != &mutex_list_mutex
#ifdef UNIV_SYNC_DEBUG
	    && mutex != &sync_thread_mutex
#endif /* UNIV_SYNC_DEBUG */
	    ) {

		mutex_enter(&mutex_list_mutex);

		ut_ad(!UT_LIST_GET_PREV(list, mutex)
		      || UT_LIST_GET_PREV(list, mutex)->magic_n
		      == MUTEX_MAGIC_N);
		ut_ad(!UT_LIST_GET_NEXT(list, mutex)
		      || UT_LIST_GET_NEXT(list, mutex)->magic_n
		      == MUTEX_MAGIC_N);

		UT_LIST_REMOVE(list, mutex_list, mutex);

		mutex_exit(&mutex_list_mutex);
	}

	os_event_free(mutex->event);
#ifdef UNIV_MEM_DEBUG
func_exit:
#endif /* UNIV_MEM_DEBUG */
#if !defined(HAVE_ATOMIC_BUILTINS)
	os_fast_mutex_free(&(mutex->os_fast_mutex));
#endif
	/* If we free the mutex protecting the mutex list (freeing is
	not necessary), we have to reset the magic number AFTER removing
	it from the list. */
#ifdef UNIV_DEBUG
	mutex->magic_n = 0;
#endif /* UNIV_DEBUG */
	return;
}

/********************************************************************//**
NOTE! Use the corresponding macro in the header file, not this function
directly. Tries to lock the mutex for the current thread. If the lock is
not acquired immediately, returns with return value 1.
@return	0 if succeed, 1 if not */
UNIV_INTERN
ulint
mutex_enter_nowait_func(
/*====================*/
	ib_mutex_t*	mutex,		/*!< in: pointer to mutex */
	const char*	file_name __attribute__((unused)),
					/*!< in: file name where mutex
					requested */
	ulint		line __attribute__((unused)))
					/*!< in: line where requested */
{
	ut_ad(mutex_validate(mutex));

	if (!ib_mutex_test_and_set(mutex)) {

		ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
		mutex_set_debug_info(mutex, file_name, line);
#endif

		return(0);	/* Succeeded! */
	}

	return(1);
}

#ifdef UNIV_DEBUG
/******************************************************************//**
Checks that the mutex has been initialized.
@return	TRUE */
UNIV_INTERN
ibool
mutex_validate(
/*===========*/
	const ib_mutex_t*	mutex)	/*!< in: mutex */
{
	ut_a(mutex);
	ut_a(mutex->magic_n == MUTEX_MAGIC_N);

	return(TRUE);
}

/******************************************************************//**
Checks that the current thread owns the mutex. Works only in the debug
version.
@return	TRUE if owns */
UNIV_INTERN
ibool
mutex_own(
/*======*/
	const ib_mutex_t*	mutex)	/*!< in: mutex */
{
	ut_ad(mutex_validate(mutex));

	return(mutex_get_lock_word(mutex) == 1
	       && os_thread_eq(mutex->thread_id, os_thread_get_curr_id()));
}
#endif /* UNIV_DEBUG */

/******************************************************************//**
Sets the waiters field in a mutex. */
UNIV_INTERN
void
mutex_set_waiters(
/*==============*/
	ib_mutex_t*	mutex,	/*!< in: mutex */
	ulint		n)	/*!< in: value to set */
{
	volatile ulint*	ptr;	/* declared volatile to ensure that
				the value is stored to memory */
	ut_ad(mutex);

	ptr = &(mutex->waiters);

	*ptr = n;		/* Here we assume that the write of a single
				word in memory is atomic */
}

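/* Illustrative sketch (not part of the original source): with C++11
atomics the assumption "a single aligned word store is atomic" can be
stated explicitly, instead of being implied by a volatile pointer. A
relaxed store is the closest equivalent of the plain write above; the
demo_* name is hypothetical. */

#include <atomic>

static void
demo_set_waiters(std::atomic<unsigned long>* waiters, unsigned long n)
{
	/* Ordering against the event operations is established
	elsewhere (see LEMMA 1), so relaxed suffices here. */
	waiters->store(n, std::memory_order_relaxed);
}
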
/******************************************************************//**
NOTE! Use the corresponding macro in the header file, not this function
directly. Reserves a mutex for the current thread. If the mutex is
reserved, the function spins a preset time (controlled by
SYNC_SPIN_ROUNDS), waiting for the mutex before suspending the thread. */
UNIV_INTERN
void
mutex_spin_wait(
/*============*/
	ib_mutex_t*	mutex,		/*!< in: pointer to mutex */
	const char*	file_name,	/*!< in: file name where mutex
					requested */
	ulint		line)		/*!< in: line where requested */
{
	ulint		i;		/* spin round count */
	ulint		index;		/* index of the reserved wait cell */
	sync_array_t*	sync_arr;
	size_t		counter_index;

	counter_index = (size_t) os_thread_get_curr_id();

	ut_ad(mutex);

	/* This update is not thread safe, but we don't mind if the count
	isn't exact. Moved out of the ifdef that follows because we are
	willing to pay the cost of counting this, as the data is valuable.
	Count the number of calls to mutex_spin_wait. */
	mutex_spin_wait_count.add(counter_index, 1);

mutex_loop:

	i = 0;

	/* Spin waiting for the lock word to become zero. Note that we do
	not have to assume that the read access to the lock word is atomic,
	as the actual locking is always committed with atomic test-and-set.
	In reality, however, all processors probably have an atomic read of
	a memory word. */

spin_loop:

	while (mutex_get_lock_word(mutex) != 0 && i < SYNC_SPIN_ROUNDS) {
		if (srv_spin_wait_delay) {
			ut_delay(ut_rnd_interval(0, srv_spin_wait_delay));
		}

		i++;
	}

	if (i == SYNC_SPIN_ROUNDS) {
		os_thread_yield();
	}

	mutex_spin_round_count.add(counter_index, i);

	if (ib_mutex_test_and_set(mutex) == 0) {
		/* Succeeded! */

		ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
		mutex_set_debug_info(mutex, file_name, line);
#endif
		return;
	}

	/* We may end up with a situation where lock_word is 0 but the OS
	fast mutex is still reserved. On FreeBSD the OS does not seem to
	schedule a thread which is constantly calling pthread_mutex_trylock
	(in the ib_mutex_test_and_set implementation). Then we could end up
	spinning here indefinitely. The following 'i++' stops this infinite
	spin. */

	i++;

	if (i < SYNC_SPIN_ROUNDS) {
		goto spin_loop;
	}

	sync_arr = sync_array_get();

	sync_array_reserve_cell(
		sync_arr, mutex, SYNC_MUTEX, file_name, line, &index);

	/* The memory order of the array reservation and the change in the
	waiters field is important: when we suspend a thread, we first
	reserve the cell and then set the waiters field to 1. When threads
	are released in mutex_exit, the waiters field is first set to zero
	and then the event is set to the signaled state. */

	mutex_set_waiters(mutex, 1);

	/* Try to acquire the mutex a few more times */
	for (i = 0; i < 4; i++) {
		if (ib_mutex_test_and_set(mutex) == 0) {
			/* Succeeded! Free the reserved wait cell */

			sync_array_free_cell(sync_arr, index);

			ut_d(mutex->thread_id = os_thread_get_curr_id());
#ifdef UNIV_SYNC_DEBUG
			mutex_set_debug_info(mutex, file_name, line);
#endif

			return;

			/* Note that in this case we leave the waiters field
			set to 1. We cannot reset it to zero, as we do not
			know if there are other waiters. */
		}
	}

	/* Now we know that there has been some thread holding the mutex
	after the changes in the wait array and the waiters field were made.
	Now there is no risk of infinite wait on the event. */

	mutex_os_wait_count.add(counter_index, 1);

	mutex->count_os_wait++;

	sync_array_wait_event(sync_arr, index);

	goto mutex_loop;
}
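
/* Illustrative sketch (not part of the original source): the
ib_counter_t objects used above shard a statistic over IB_N_SLOTS slots,
indexed here by thread id, so that the deliberately unlocked add() calls
from different threads mostly land on different cache lines. A minimal
version of the idea; demo_* names are hypothetical. */

#define DEMO_N_SLOTS	64
#define DEMO_LINE_SIZE	64

struct demo_counter_t {
	struct {
		ib_int64_t	value;
		char		pad[DEMO_LINE_SIZE - sizeof(ib_int64_t)];
	} slot[DEMO_N_SLOTS];

	void add(size_t index, ib_int64_t n)
	{
		/* Lossy under concurrent updates, like the monitoring
		counters above; that is an accepted trade-off. */
		slot[index % DEMO_N_SLOTS].value += n;
	}

	ib_int64_t total() const
	{
		ib_int64_t	sum = 0;

		for (size_t i = 0; i < DEMO_N_SLOTS; i++) {
			sum += slot[i].value;
		}

		return(sum);
	}
};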

/******************************************************************//**
Wakes up any threads waiting for the mutex. */
UNIV_INTERN
void
mutex_signal_object(
/*================*/
	ib_mutex_t*	mutex)	/*!< in: mutex */
{
	mutex_set_waiters(mutex, 0);

	/* The memory order of resetting the waiters field and
	signaling the object is important. See LEMMA 1 above. */
	os_event_set(mutex->event);
	sync_array_object_signalled();
}

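/* Illustrative sketch (not part of the original source): the
signal_count scheme that LEMMA 2 above relies on, modelled with plain
pthreads. A reset returns the current signal_count; the waiter hands
that value back in and returns immediately if any set() has happened
since, so a set() between reset() and wait() is never lost. The
demo_* names are hypothetical. */

#include <pthread.h>

struct demo_event_t {
	pthread_mutex_t	mutex;
	pthread_cond_t	cond;
	bool		is_set;
	ib_int64_t	signal_count;
};

static ib_int64_t
demo_event_reset(demo_event_t* ev)
{
	pthread_mutex_lock(&ev->mutex);
	ev->is_set = false;
	ib_int64_t count = ev->signal_count;
	pthread_mutex_unlock(&ev->mutex);

	return(count);
}

static void
demo_event_set(demo_event_t* ev)
{
	pthread_mutex_lock(&ev->mutex);
	ev->is_set = true;
	ev->signal_count++;
	pthread_cond_broadcast(&ev->cond);
	pthread_mutex_unlock(&ev->mutex);
}

static void
demo_event_wait(demo_event_t* ev, ib_int64_t reset_count)
{
	pthread_mutex_lock(&ev->mutex);
	while (!ev->is_set && ev->signal_count == reset_count) {
		pthread_cond_wait(&ev->cond, &ev->mutex);
	}
	pthread_mutex_unlock(&ev->mutex);
}
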
#ifdef UNIV_SYNC_DEBUG
/******************************************************************//**
Sets the debug information for a reserved mutex. */
UNIV_INTERN
void
mutex_set_debug_info(
/*=================*/
	ib_mutex_t*	mutex,		/*!< in: mutex */
	const char*	file_name,	/*!< in: file where requested */
	ulint		line)		/*!< in: line where requested */
{
	ut_ad(mutex);
	ut_ad(file_name);

	sync_thread_add_level(mutex, mutex->level, FALSE);

	mutex->file_name = file_name;
	mutex->line	 = line;
}

/******************************************************************//**
Gets the debug information for a reserved mutex. */
UNIV_INTERN
void
mutex_get_debug_info(
/*=================*/
	ib_mutex_t*	mutex,		/*!< in: mutex */
	const char**	file_name,	/*!< out: file where requested */
	ulint*		line,		/*!< out: line where requested */
	os_thread_id_t* thread_id)	/*!< out: id of the thread which owns
					the mutex */
{
	ut_ad(mutex);

	*file_name = mutex->file_name;
	*line	   = mutex->line;
	*thread_id = mutex->thread_id;
}

/******************************************************************//**
Prints debug info of currently reserved mutexes. */
static
void
mutex_list_print_info(
/*==================*/
	FILE*	file)		/*!< in: file where to print */
{
	ib_mutex_t*	mutex;
	const char*	file_name;
	ulint		line;
	os_thread_id_t	thread_id;
	ulint		count		= 0;

	fputs("----------\n"
	      "MUTEX INFO\n"
	      "----------\n", file);

	mutex_enter(&mutex_list_mutex);

	mutex = UT_LIST_GET_FIRST(mutex_list);

	while (mutex != NULL) {
		count++;

		if (mutex_get_lock_word(mutex) != 0) {
			mutex_get_debug_info(mutex, &file_name, &line,
					     &thread_id);
			fprintf(file,
				"Locked mutex: addr %p thread %ld"
				" file %s line %ld\n",
				(void*) mutex, os_thread_pf(thread_id),
				file_name, line);
		}

		mutex = UT_LIST_GET_NEXT(list, mutex);
	}

	fprintf(file, "Total number of mutexes %ld\n", count);

	mutex_exit(&mutex_list_mutex);
}

/******************************************************************//**
Counts currently reserved mutexes. Works only in the debug version.
@return	number of reserved mutexes */
UNIV_INTERN
ulint
mutex_n_reserved(void)
/*==================*/
{
	ib_mutex_t*	mutex;
	ulint		count	= 0;

	mutex_enter(&mutex_list_mutex);

	for (mutex = UT_LIST_GET_FIRST(mutex_list);
	     mutex != NULL;
	     mutex = UT_LIST_GET_NEXT(list, mutex)) {

		if (mutex_get_lock_word(mutex) != 0) {

			count++;
		}
	}

	mutex_exit(&mutex_list_mutex);

	ut_a(count >= 1);

	/* Subtract one, because this function itself was holding
	one mutex (mutex_list_mutex) */

	return(count - 1);
}

/******************************************************************//**
Returns TRUE if no mutex or rw-lock is currently locked. Works only in
the debug version.
@return	TRUE if no mutexes and rw-locks reserved */
UNIV_INTERN
ibool
sync_all_freed(void)
/*================*/
{
	return(mutex_n_reserved() + rw_lock_n_locked() == 0);
}

/******************************************************************//**
Looks for the thread slot for the calling thread.
@return	pointer to thread slot, NULL if not found */
static
sync_thread_t*
sync_thread_level_arrays_find_slot(void)
/*====================================*/

{
	ulint		i;
	os_thread_id_t	id;

	id = os_thread_get_curr_id();

	for (i = 0; i < OS_THREAD_MAX_N; i++) {
		sync_thread_t*	slot;

		slot = &sync_thread_level_arrays[i];

		if (slot->levels && os_thread_eq(slot->id, id)) {

			return(slot);
		}
	}

	return(NULL);
}

/******************************************************************//**
Looks for an unused thread slot.
@return	pointer to thread slot */
static
sync_thread_t*
sync_thread_level_arrays_find_free(void)
/*====================================*/

{
	ulint	i;

	for (i = 0; i < OS_THREAD_MAX_N; i++) {
		sync_thread_t*	slot;

		slot = &sync_thread_level_arrays[i];

		if (slot->levels == NULL) {

			return(slot);
		}
	}

	return(NULL);
}

/******************************************************************//**
Prints a warning about a slot in the latch level array. */
static
void
sync_print_warning(
/*===============*/
	const sync_level_t*	slot)	/*!< in: slot for which to
					print warning */
{
	ib_mutex_t*	mutex;

	mutex = static_cast<ib_mutex_t*>(slot->latch);

	if (mutex->magic_n == MUTEX_MAGIC_N) {
		fprintf(stderr,
			"Mutex created at %s %lu\n",
			innobase_basename(mutex->cfile_name),
			(ulong) mutex->cline);

		if (mutex_get_lock_word(mutex) != 0) {
			ulint		line;
			const char*	file_name;
			os_thread_id_t	thread_id;

			mutex_get_debug_info(
				mutex, &file_name, &line, &thread_id);

			fprintf(stderr,
				"InnoDB: Locked mutex:"
				" addr %p thread %ld file %s line %ld\n",
				(void*) mutex, os_thread_pf(thread_id),
				file_name, (ulong) line);
		} else {
			fputs("Not locked\n", stderr);
		}
	} else {
		rw_lock_t*	lock;

		lock = static_cast<rw_lock_t*>(slot->latch);

		rw_lock_print(lock);
	}
}

/******************************************************************//**
Checks if all the level values stored in the level array are greater than
the given limit.
@return	TRUE if all greater */
static
ibool
sync_thread_levels_g(
/*=================*/
	sync_arr_t*	arr,	/*!< in: pointer to level array for the
				thread */
	ulint		limit,	/*!< in: level limit */
	ulint		warn)	/*!< in: TRUE=display a diagnostic message */
{
	ulint	i;

	for (i = 0; i < arr->n_elems; i++) {
		const sync_level_t*	slot;

		slot = &arr->elems[i];

		if (slot->latch != NULL && slot->level <= limit) {
			if (warn) {
				fprintf(stderr,
					"InnoDB: sync levels should be"
					" > %lu but a level is %lu\n",
					(ulong) limit, (ulong) slot->level);

				sync_print_warning(slot);
			}

			return(FALSE);
		}
	}

	return(TRUE);
}

/******************************************************************//**
Checks if the level value is stored in the level array.
@return	slot if found or NULL */
static
const sync_level_t*
sync_thread_levels_contain(
/*=======================*/
	sync_arr_t*	arr,	/*!< in: pointer to level array for the
				thread */
	ulint		level)	/*!< in: level */
{
	ulint	i;

	for (i = 0; i < arr->n_elems; i++) {
		const sync_level_t*	slot;

		slot = &arr->elems[i];

		if (slot->latch != NULL && slot->level == level) {

			return(slot);
		}
	}

	return(NULL);
}

/******************************************************************//**
Checks if the level array for the current thread contains a
mutex or rw-latch at the specified level.
@return	a matching latch, or NULL if not found */
UNIV_INTERN
void*
sync_thread_levels_contains(
/*========================*/
	ulint	level)			/*!< in: latching order level
					(SYNC_DICT, ...)*/
{
	ulint		i;
	sync_arr_t*	arr;
	sync_thread_t*	thread_slot;

	if (!sync_order_checks_on) {

		return(NULL);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		mutex_exit(&sync_thread_mutex);

		return(NULL);
	}

	arr = thread_slot->levels;

	for (i = 0; i < arr->n_elems; i++) {
		sync_level_t*	slot;

		slot = &arr->elems[i];

		if (slot->latch != NULL && slot->level == level) {

			mutex_exit(&sync_thread_mutex);
			return(slot->latch);
		}
	}

	mutex_exit(&sync_thread_mutex);

	return(NULL);
}

/******************************************************************//**
Checks that the level array for the current thread is empty.
@return	a latch, or NULL if empty except the exceptions specified below */
UNIV_INTERN
void*
sync_thread_levels_nonempty_gen(
/*============================*/
	ibool	dict_mutex_allowed)	/*!< in: TRUE if dictionary mutex is
					allowed to be owned by the thread */
{
	ulint		i;
	sync_arr_t*	arr;
	sync_thread_t*	thread_slot;

	if (!sync_order_checks_on) {

		return(NULL);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		mutex_exit(&sync_thread_mutex);

		return(NULL);
	}

	arr = thread_slot->levels;

	for (i = 0; i < arr->n_elems; ++i) {
		const sync_level_t*	slot;

		slot = &arr->elems[i];

		if (slot->latch != NULL
		    && (!dict_mutex_allowed
			|| (slot->level != SYNC_DICT
			    && slot->level != SYNC_DICT_OPERATION
			    && slot->level != SYNC_FTS_CACHE))) {

			mutex_exit(&sync_thread_mutex);
			ut_error;

			return(slot->latch);
		}
	}

	mutex_exit(&sync_thread_mutex);

	return(NULL);
}

/******************************************************************//**
Checks if the level array for the current thread is empty,
except for the btr_search_latch.
@return	a latch, or NULL if empty except the exceptions specified below */
UNIV_INTERN
void*
sync_thread_levels_nonempty_trx(
/*============================*/
	ibool	has_search_latch)	/*!< in: TRUE if and only if the
					thread is supposed to hold
					btr_search_latch */
{
	ulint		i;
	sync_arr_t*	arr;
	sync_thread_t*	thread_slot;

	if (!sync_order_checks_on) {

		return(NULL);
	}

	ut_a(!has_search_latch
	     || sync_thread_levels_contains(SYNC_SEARCH_SYS));

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		mutex_exit(&sync_thread_mutex);

		return(NULL);
	}

	arr = thread_slot->levels;

	for (i = 0; i < arr->n_elems; ++i) {
		const sync_level_t*	slot;

		slot = &arr->elems[i];

		if (slot->latch != NULL
		    && (!has_search_latch
			|| slot->level != SYNC_SEARCH_SYS)) {

			mutex_exit(&sync_thread_mutex);
			ut_error;

			return(slot->latch);
		}
	}

	mutex_exit(&sync_thread_mutex);

	return(NULL);
}

/******************************************************************//**
Adds a latch and its level in the thread level array. Allocates the memory
for the array if called for the first time for this OS thread. Makes the
checks against other latch levels stored in the array for this thread. */
UNIV_INTERN
void
sync_thread_add_level(
/*==================*/
	void*	latch,	/*!< in: pointer to a mutex or an rw-lock */
	ulint	level,	/*!< in: level in the latching order; if
			SYNC_LEVEL_VARYING, nothing is done */
	ibool	relock)	/*!< in: TRUE if re-entering an x-lock */
{
	ulint		i;
	sync_level_t*	slot;
	sync_arr_t*	array;
	sync_thread_t*	thread_slot;

	if (!sync_order_checks_on) {

		return;
	}

	if ((latch == (void*) &sync_thread_mutex)
	    || (latch == (void*) &mutex_list_mutex)
	    || (latch == (void*) &rw_lock_debug_mutex)
	    || (latch == (void*) &rw_lock_list_mutex)) {

		return;
	}

	if (level == SYNC_LEVEL_VARYING) {

		return;
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {
		ulint	sz;

		sz = sizeof(*array)
		   + (sizeof(*array->elems) * SYNC_THREAD_N_LEVELS);

		/* We have to allocate the level array for a new thread */
		array = static_cast<sync_arr_t*>(calloc(sz, sizeof(char)));
		ut_a(array != NULL);

		array->next_free = ULINT_UNDEFINED;
		array->max_elems = SYNC_THREAD_N_LEVELS;
		array->elems = (sync_level_t*) &array[1];

		thread_slot = sync_thread_level_arrays_find_free();

		thread_slot->levels = array;
		thread_slot->id = os_thread_get_curr_id();
	}

	array = thread_slot->levels;

	if (relock) {
		goto levels_ok;
	}

	/* NOTE that there is a problem with _NODE and _LEAF levels: if the
	B-tree height changes, then a leaf can change to an internal node
	or the other way around. We do not know at present if this can cause
	unnecessary assertion failures below. */

	switch (level) {
	case SYNC_NO_ORDER_CHECK:
	case SYNC_EXTERN_STORAGE:
	case SYNC_TREE_NODE_FROM_HASH:
		/* Do no order checking */
		break;
	case SYNC_TRX_SYS_HEADER:
		if (srv_is_being_started) {
			/* This is violated during trx_sys_create_rsegs()
			when creating additional rollback segments when
			upgrading in innobase_start_or_create_for_mysql(). */
			break;
		}
		/* Fall through */
	case SYNC_MEM_POOL:
	case SYNC_MEM_HASH:
	case SYNC_RECV:
	case SYNC_FTS_BG_THREADS:
	case SYNC_WORK_QUEUE:
	case SYNC_FTS_OPTIMIZE:
	case SYNC_FTS_CACHE:
	case SYNC_FTS_CACHE_INIT:
	case SYNC_LOG:
	case SYNC_LOG_FLUSH_ORDER:
	case SYNC_ANY_LATCH:
	case SYNC_FILE_FORMAT_TAG:
	case SYNC_DOUBLEWRITE:
	case SYNC_SEARCH_SYS:
	case SYNC_THREADS:
	case SYNC_LOCK_SYS:
	case SYNC_LOCK_WAIT_SYS:
	case SYNC_TRX_SYS:
	case SYNC_IBUF_BITMAP_MUTEX:
	case SYNC_RSEG:
	case SYNC_TRX_UNDO:
	case SYNC_PURGE_LATCH:
	case SYNC_PURGE_QUEUE:
	case SYNC_DICT_AUTOINC_MUTEX:
	case SYNC_DICT_OPERATION:
	case SYNC_DICT_HEADER:
	case SYNC_TRX_I_S_RWLOCK:
	case SYNC_TRX_I_S_LAST_READ:
	case SYNC_IBUF_MUTEX:
	case SYNC_INDEX_ONLINE_LOG:
	case SYNC_STATS_AUTO_RECALC:
		if (!sync_thread_levels_g(array, level, TRUE)) {
			fprintf(stderr,
				"InnoDB: sync_thread_levels_g(array, %lu)"
				" does not hold!\n", level);
			ut_error;
		}
		break;
	case SYNC_TRX:
		/* Either the thread must own the lock_sys->mutex, or
		it is allowed to own only ONE trx->mutex. */
		if (!sync_thread_levels_g(array, level, FALSE)) {
			ut_a(sync_thread_levels_g(array, level - 1, TRUE));
			ut_a(sync_thread_levels_contain(array, SYNC_LOCK_SYS));
		}
		break;
	case SYNC_BUF_FLUSH_LIST:
	case SYNC_BUF_POOL:
		/* We can have multiple mutexes of this type; therefore we
		can only check whether the greater-than condition holds. */
		if (!sync_thread_levels_g(array, level - 1, TRUE)) {
			fprintf(stderr,
				"InnoDB: sync_thread_levels_g(array, %lu)"
				" does not hold!\n", level - 1);
			ut_error;
		}
		break;

	case SYNC_BUF_PAGE_HASH:
		/* Multiple page_hash locks are only allowed during
		buf_validate and that is where buf_pool mutex is already
		held. */
		/* Fall through */

	case SYNC_BUF_BLOCK:
		/* Either the thread must own the buffer pool mutex
		(buf_pool->mutex), or it is allowed to latch only ONE
		buffer block (block->mutex or buf_pool->zip_mutex). */
		if (!sync_thread_levels_g(array, level, FALSE)) {
			ut_a(sync_thread_levels_g(array, level - 1, TRUE));
			ut_a(sync_thread_levels_contain(array, SYNC_BUF_POOL));
		}
		break;
	case SYNC_REC_LOCK:
		if (sync_thread_levels_contain(array, SYNC_LOCK_SYS)) {
			ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK - 1,
						  TRUE));
		} else {
			ut_a(sync_thread_levels_g(array, SYNC_REC_LOCK, TRUE));
		}
		break;
	case SYNC_IBUF_BITMAP:
		/* Either the thread must own the master mutex to all
		the bitmap pages, or it is allowed to latch only ONE
		bitmap page. */
		if (sync_thread_levels_contain(array,
					       SYNC_IBUF_BITMAP_MUTEX)) {
			ut_a(sync_thread_levels_g(array, SYNC_IBUF_BITMAP - 1,
						  TRUE));
		} else {
			/* This is violated during trx_sys_create_rsegs()
			when creating additional rollback segments when
			upgrading in innobase_start_or_create_for_mysql(). */
			ut_a(srv_is_being_started
			     || sync_thread_levels_g(array, SYNC_IBUF_BITMAP,
						     TRUE));
		}
		break;
	case SYNC_FSP_PAGE:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP));
		break;
	case SYNC_FSP:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP)
		     || sync_thread_levels_g(array, SYNC_FSP, TRUE));
		break;
	case SYNC_TRX_UNDO_PAGE:
		/* Purge is allowed to read in as many UNDO pages as it
		likes. There was a bogus rule here earlier that forced the
		caller to acquire the purge_sys_t::mutex. The purge mutex
		did not really protect anything because it was only ever
		acquired by the single purge thread. The purge thread can
		read the UNDO pages without any covering mutex. */

		ut_a(sync_thread_levels_contain(array, SYNC_TRX_UNDO)
		     || sync_thread_levels_contain(array, SYNC_RSEG)
		     || sync_thread_levels_g(array, level - 1, TRUE));
		break;
	case SYNC_RSEG_HEADER:
		ut_a(sync_thread_levels_contain(array, SYNC_RSEG));
		break;
	case SYNC_RSEG_HEADER_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP_PAGE));
		break;
	case SYNC_TREE_NODE:
		ut_a(sync_thread_levels_contain(array, SYNC_INDEX_TREE)
		     || sync_thread_levels_contain(array, SYNC_DICT_OPERATION)
		     || sync_thread_levels_g(array, SYNC_TREE_NODE - 1, TRUE));
		break;
	case SYNC_TREE_NODE_NEW:
		ut_a(sync_thread_levels_contain(array, SYNC_FSP_PAGE));
		break;
	case SYNC_INDEX_TREE:
		ut_a(sync_thread_levels_g(array, SYNC_TREE_NODE - 1, TRUE));
		break;
	case SYNC_IBUF_TREE_NODE:
		ut_a(sync_thread_levels_contain(array, SYNC_IBUF_INDEX_TREE)
		     || sync_thread_levels_g(array, SYNC_IBUF_TREE_NODE - 1,
					     TRUE));
		break;
	case SYNC_IBUF_TREE_NODE_NEW:
		/* ibuf_add_free_page() allocates new pages for the
		change buffer while only holding the tablespace
		x-latch. These pre-allocated new pages may only be
		taken in use while holding ibuf_mutex, in
		btr_page_alloc_for_ibuf(). */
		ut_a(sync_thread_levels_contain(array, SYNC_IBUF_MUTEX)
		     || sync_thread_levels_contain(array, SYNC_FSP));
		break;
	case SYNC_IBUF_INDEX_TREE:
		if (sync_thread_levels_contain(array, SYNC_FSP)) {
			ut_a(sync_thread_levels_g(array, level - 1, TRUE));
		} else {
			ut_a(sync_thread_levels_g(
				     array, SYNC_IBUF_TREE_NODE - 1, TRUE));
		}
		break;
	case SYNC_IBUF_PESS_INSERT_MUTEX:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
		ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		break;
	case SYNC_IBUF_HEADER:
		ut_a(sync_thread_levels_g(array, SYNC_FSP - 1, TRUE));
		ut_a(!sync_thread_levels_contain(array, SYNC_IBUF_MUTEX));
		ut_a(!sync_thread_levels_contain(array,
						 SYNC_IBUF_PESS_INSERT_MUTEX));
		break;
	case SYNC_DICT:
#ifdef UNIV_DEBUG
		ut_a(buf_debug_prints
		     || sync_thread_levels_g(array, SYNC_DICT, TRUE));
#else /* UNIV_DEBUG */
		ut_a(sync_thread_levels_g(array, SYNC_DICT, TRUE));
#endif /* UNIV_DEBUG */
		break;
	default:
		ut_error;
	}

levels_ok:
	if (array->next_free == ULINT_UNDEFINED) {
		ut_a(array->n_elems < array->max_elems);

		i = array->n_elems++;
	} else {
		i = array->next_free;
		array->next_free = array->elems[i].level;
	}

	ut_a(i < array->n_elems);
	ut_a(i != ULINT_UNDEFINED);

	++array->in_use;

	slot = &array->elems[i];

	ut_a(slot->latch == NULL);

	slot->latch = latch;
	slot->level = level;

	mutex_exit(&sync_thread_mutex);
}
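
/* Illustrative sketch (not part of the original source): the free-slot
bookkeeping used by sync_thread_add_level() above and
sync_thread_reset_level() below. A freed slot's level field is reused as
the "next free" index, forming an intrusive free list inside the array
with no extra memory. The demo_* names are hypothetical. */

static ulint
demo_slot_alloc(sync_arr_t* arr)
{
	ulint	i;

	if (arr->next_free == ULINT_UNDEFINED) {
		i = arr->n_elems++;		/* grow into fresh space */
	} else {
		i = arr->next_free;		/* pop the free list */
		arr->next_free = arr->elems[i].level;
	}

	return(i);
}

static void
demo_slot_free(sync_arr_t* arr, ulint i)
{
	arr->elems[i].latch = NULL;
	arr->elems[i].level = arr->next_free;	/* push the free list */
	arr->next_free = i;
}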

/******************************************************************//**
Removes a latch from the thread level array if it is found there.
@return	TRUE if found in the array; it is no error if the latch is
not found, as we presently are not able to determine the level for
every latch reservation the program does */
UNIV_INTERN
ibool
sync_thread_reset_level(
/*====================*/
	void*	latch)	/*!< in: pointer to a mutex or an rw-lock */
{
	sync_arr_t*	array;
	sync_thread_t*	thread_slot;
	ulint		i;

	if (!sync_order_checks_on) {

		return(FALSE);
	}

	if ((latch == (void*) &sync_thread_mutex)
	    || (latch == (void*) &mutex_list_mutex)
	    || (latch == (void*) &rw_lock_debug_mutex)
	    || (latch == (void*) &rw_lock_list_mutex)) {

		return(FALSE);
	}

	mutex_enter(&sync_thread_mutex);

	thread_slot = sync_thread_level_arrays_find_slot();

	if (thread_slot == NULL) {

		ut_error;

		mutex_exit(&sync_thread_mutex);
		return(FALSE);
	}

	array = thread_slot->levels;

	for (i = 0; i < array->n_elems; i++) {
		sync_level_t*	slot;

		slot = &array->elems[i];

		if (slot->latch != latch) {
			continue;
		}

		slot->latch = NULL;

		/* Update the free slot list. See comment in sync_level_t
		for the level field. */
		slot->level = array->next_free;
		array->next_free = i;

		ut_a(array->in_use >= 1);
		--array->in_use;

		/* If all cells are idle then reset the free
		list. The assumption is that this will save
		time when we need to scan up to n_elems. */

		if (array->in_use == 0) {
			array->n_elems = 0;
			array->next_free = ULINT_UNDEFINED;
		}

		mutex_exit(&sync_thread_mutex);

		return(TRUE);
	}

	if (((ib_mutex_t*) latch)->magic_n != MUTEX_MAGIC_N) {
		rw_lock_t*	rw_lock;

		rw_lock = (rw_lock_t*) latch;

		if (rw_lock->level == SYNC_LEVEL_VARYING) {
			mutex_exit(&sync_thread_mutex);

			return(TRUE);
		}
	}

	ut_error;

	mutex_exit(&sync_thread_mutex);

	return(FALSE);
}
#endif /* UNIV_SYNC_DEBUG */

/******************************************************************//**
Initializes the synchronization data structures. */
UNIV_INTERN
void
sync_init(void)
/*===========*/
{
	ut_a(sync_initialized == FALSE);

	sync_initialized = TRUE;

	sync_array_init(OS_THREAD_MAX_N);

#ifdef UNIV_SYNC_DEBUG
	/* Create the thread latch level array where the latch levels
	are stored for each OS thread */

	sync_thread_level_arrays = static_cast<sync_thread_t*>(
		calloc(sizeof(sync_thread_t), OS_THREAD_MAX_N));

	ut_a(sync_thread_level_arrays != NULL);

#endif /* UNIV_SYNC_DEBUG */
	/* Init the mutex list and create the mutex to protect it. */

	UT_LIST_INIT(mutex_list);
	mutex_create(mutex_list_mutex_key, &mutex_list_mutex,
		     SYNC_NO_ORDER_CHECK);
#ifdef UNIV_SYNC_DEBUG
	mutex_create(sync_thread_mutex_key, &sync_thread_mutex,
		     SYNC_NO_ORDER_CHECK);
#endif /* UNIV_SYNC_DEBUG */

	/* Init the rw-lock list and create the mutex to protect it. */

	UT_LIST_INIT(rw_lock_list);
	mutex_create(rw_lock_list_mutex_key, &rw_lock_list_mutex,
		     SYNC_NO_ORDER_CHECK);

#ifdef UNIV_SYNC_DEBUG
	mutex_create(rw_lock_debug_mutex_key, &rw_lock_debug_mutex,
		     SYNC_NO_ORDER_CHECK);

	rw_lock_debug_event = os_event_create();
	rw_lock_debug_waiters = FALSE;
#endif /* UNIV_SYNC_DEBUG */
}

#ifdef UNIV_SYNC_DEBUG
/******************************************************************//**
Frees all debug memory. */
static
void
sync_thread_level_arrays_free(void)
/*===============================*/

{
	ulint	i;

	for (i = 0; i < OS_THREAD_MAX_N; i++) {
		sync_thread_t*	slot;

		slot = &sync_thread_level_arrays[i];

		/* If this slot was allocated then free the slot memory too. */
		if (slot->levels != NULL) {
			free(slot->levels);
			slot->levels = NULL;
		}
	}

	free(sync_thread_level_arrays);
	sync_thread_level_arrays = NULL;
}
#endif /* UNIV_SYNC_DEBUG */

/******************************************************************//**
Frees the resources in InnoDB's own synchronization data structures. Use
os_sync_free() after calling this. */
UNIV_INTERN
void
sync_close(void)
/*===========*/
{
	ib_mutex_t*	mutex;

	sync_array_close();

	for (mutex = UT_LIST_GET_FIRST(mutex_list);
	     mutex != NULL;
	     /* No op */) {

#ifdef UNIV_MEM_DEBUG
		if (mutex == &mem_hash_mutex) {
			mutex = UT_LIST_GET_NEXT(list, mutex);
			continue;
		}
#endif /* UNIV_MEM_DEBUG */

		mutex_free(mutex);

		mutex = UT_LIST_GET_FIRST(mutex_list);
	}

	mutex_free(&mutex_list_mutex);
#ifdef UNIV_SYNC_DEBUG
	mutex_free(&sync_thread_mutex);

	/* Switch latching order checks off in sync0sync.cc */
	sync_order_checks_on = FALSE;

	sync_thread_level_arrays_free();
#endif /* UNIV_SYNC_DEBUG */

	sync_initialized = FALSE;
}

/*******************************************************************//**
Prints wait info of the sync system. */
UNIV_INTERN
void
sync_print_wait_info(
/*=================*/
	FILE*	file)		/*!< in: file where to print */
{
	fprintf(file,
		"Mutex spin waits "UINT64PF", rounds "UINT64PF", "
		"OS waits "UINT64PF"\n"
		"RW-shared spins "UINT64PF", rounds "UINT64PF", "
		"OS waits "UINT64PF"\n"
		"RW-excl spins "UINT64PF", rounds "UINT64PF", "
		"OS waits "UINT64PF"\n",
		(ib_uint64_t) mutex_spin_wait_count,
		(ib_uint64_t) mutex_spin_round_count,
		(ib_uint64_t) mutex_os_wait_count,
		(ib_uint64_t) rw_lock_stats.rw_s_spin_wait_count,
		(ib_uint64_t) rw_lock_stats.rw_s_spin_round_count,
		(ib_uint64_t) rw_lock_stats.rw_s_os_wait_count,
		(ib_uint64_t) rw_lock_stats.rw_x_spin_wait_count,
		(ib_uint64_t) rw_lock_stats.rw_x_spin_round_count,
		(ib_uint64_t) rw_lock_stats.rw_x_os_wait_count);

	fprintf(file,
		"Spin rounds per wait: %.2f mutex, %.2f RW-shared, "
		"%.2f RW-excl\n",
		(double) mutex_spin_round_count /
		(mutex_spin_wait_count ? mutex_spin_wait_count : 1),
		(double) rw_lock_stats.rw_s_spin_round_count /
		(rw_lock_stats.rw_s_spin_wait_count
		 ? rw_lock_stats.rw_s_spin_wait_count : 1),
		(double) rw_lock_stats.rw_x_spin_round_count /
		(rw_lock_stats.rw_x_spin_wait_count
		 ? rw_lock_stats.rw_x_spin_wait_count : 1));
}

/*******************************************************************//**
Prints info of the sync system. */
UNIV_INTERN
void
sync_print(
/*=======*/
	FILE*	file)		/*!< in: file where to print */
{
#ifdef UNIV_SYNC_DEBUG
	mutex_list_print_info(file);

	rw_lock_list_print_info(file);
#endif /* UNIV_SYNC_DEBUG */

	sync_array_print(file);

	sync_print_wait_info(file);
}