Struct mariadb_sys::MDL_context
#[repr(C)]
pub struct MDL_context {
pub m_wait: MDL_wait,
pub m_tickets: [MDL_context_Ticket_list; 3],
pub m_owner: *mut MDL_context_owner,
pub m_needs_thr_lock_abort: bool,
pub m_LOCK_waiting_for: mysql_prlock_t,
pub m_waiting_for: *mut MDL_wait_for_subgraph,
pub m_pins: *mut LF_PINS,
pub m_deadlock_overweight: uint,
pub lock_warrant: *mut MDL_context,
}
Context of the owner of metadata locks. I.e. each server connection has such a context.
Fields
m_wait: MDL_wait
If our request for a lock is scheduled, or aborted by the deadlock detector, the result is recorded in this class.
m_tickets: [MDL_context_Ticket_list; 3]
Lists of all MDL tickets acquired by this connection.
Lists of MDL tickets:
The entire set of locks acquired by a connection can be separated into three subsets according to their duration: locks released at the end of the statement, locks released at the end of the transaction, and locks released explicitly.
Statement and transactional locks are locks with automatic scope. They are accumulated in the course of a transaction, and released either at the end of the uppermost statement (for statement locks) or on COMMIT, ROLLBACK or ROLLBACK TO SAVEPOINT (for transactional locks). They must not be (and never are) released manually, i.e. with a release_lock() call.
Tickets with explicit duration are taken for locks that span multiple transactions or savepoints. These are: HANDLER SQL locks (HANDLER SQL is transaction-agnostic), LOCK TABLES locks (you can COMMIT/etc under LOCK TABLES, and the locked tables stay locked), user level locks (GET_LOCK()/RELEASE_LOCK() functions) and locks implementing “global read lock”.
Statement/transactional locks are always prepended to the beginning of the appropriate list. In other words, they are stored in reverse temporal order. Thus, when we rollback to a savepoint, we start popping and releasing tickets from the front until we reach the last ticket acquired after the savepoint.
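The reverse-temporal ordering described above can be sketched in plain Rust. This is a hypothetical model, not the mariadb_sys API: `Context`, `Ticket`, `acquire`, and `rollback_to_savepoint` are illustrative names, and safe `Vec`s stand in for the intrusive `MDL_context_Ticket_list`s; the three-element array mirrors `m_tickets: [MDL_context_Ticket_list; 3]`, one list per duration.

```rust
// Illustrative model only; not the real mariadb_sys types.
#[derive(Clone, Copy)]
enum Duration {
    Statement = 0,
    Transaction = 1,
    Explicit = 2,
}

#[derive(Clone, PartialEq, Eq)]
struct Ticket {
    lock_name: &'static str,
}

#[derive(Default)]
struct Context {
    // One list per duration, like `m_tickets: [MDL_context_Ticket_list; 3]`.
    tickets: [Vec<Ticket>; 3],
}

impl Context {
    // Tickets are prepended, so each list is in reverse temporal order.
    fn acquire(&mut self, d: Duration, lock_name: &'static str) {
        self.tickets[d as usize].insert(0, Ticket { lock_name });
    }

    // A savepoint remembers the ticket at the front of the transactional
    // list at the time it was set (None if the list was empty).
    fn savepoint(&self) -> Option<Ticket> {
        self.tickets[Duration::Transaction as usize].first().cloned()
    }

    // Rollback pops and releases tickets from the front until we reach
    // the last ticket acquired before the savepoint.
    fn rollback_to_savepoint(&mut self, sv: Option<Ticket>) {
        let list = &mut self.tickets[Duration::Transaction as usize];
        while let Some(front) = list.first() {
            if sv.as_ref() == Some(front) {
                break;
            }
            list.remove(0);
        }
    }
}

fn main() {
    let mut ctx = Context::default();
    ctx.acquire(Duration::Transaction, "db1.t1");
    let sv = ctx.savepoint();
    ctx.acquire(Duration::Transaction, "db1.t2");
    ctx.acquire(Duration::Transaction, "db1.t3");
    ctx.rollback_to_savepoint(sv);
    // Only the ticket acquired before the savepoint remains.
    assert_eq!(ctx.tickets[Duration::Transaction as usize].len(), 1);
}
```

Because the newest ticket is always at the front, rollback never has to scan the whole list: it releases from the head until it meets the savepoint marker.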
Locks with explicit duration are not stored in any particular order, and among each other can be split into four sets:
[LOCK TABLES locks] [USER locks] [HANDLER locks] [GLOBAL READ LOCK locks]
The following is known about these sets:
- GLOBAL READ LOCK locks are always stored last. This is because one can’t say SET GLOBAL read_only=1 or FLUSH TABLES WITH READ LOCK if one has locked tables. One can, however, LOCK TABLES after having entered read-only mode. Note that a subsequent LOCK TABLES statement will unlock the previous set of tables, but not the GRL!
- There are no HANDLER locks after GRL locks, because SET GLOBAL read_only performs a FLUSH TABLES WITH READ LOCK internally, and FLUSH TABLES, in turn, implicitly closes all open HANDLERs. However, one can open a few HANDLERs after entering read-only mode.
- LOCK TABLES locks include intention exclusive locks on the involved schemas and the global intention exclusive lock.
m_owner: *mut MDL_context_owner
m_needs_thr_lock_abort: bool
TRUE if, for this context, we will break the protocol and try to acquire table-level locks while holding only an S lock on some table. To avoid deadlocks that might occur during a concurrent upgrade of an SNRW lock on such an object to an X lock, we have to abort waits for table-level locks for such connections. FALSE otherwise.
m_LOCK_waiting_for: mysql_prlock_t
Read-write lock protecting m_waiting_for member.
Note: the fact that this read-write lock prefers readers is important, as the deadlock detector won’t work correctly otherwise. See the comment for MDL_lock::m_rwlock.
m_waiting_for: *mut MDL_wait_for_subgraph
Tell the deadlock detector what metadata lock or table definition cache entry this session is waiting for. In principle, this is redundant, as information can be found by inspecting waiting queues, but we’d very much like it to be readily available to the wait-for graph iterator.
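Since each context waits for at most one resource at a time, `m_waiting_for` gives every blocked context a single outgoing edge, and the deadlock detector only has to walk a chain of these edges looking for a cycle. A minimal sketch of that idea, with contexts reduced to plain IDs and the per-context edge stored in a map (illustrative names, not the mariadb_sys API):

```rust
use std::collections::HashMap;

// Each context ID maps to the context it is waiting for (its single
// outgoing wait-for edge, like `m_waiting_for`). Absent key = not waiting.
type CtxId = u32;

// Follow the chain of wait-for edges from `start`; revisiting any context
// already on the path means the graph has a cycle, i.e. a deadlock.
fn is_deadlocked(waiting_for: &HashMap<CtxId, CtxId>, start: CtxId) -> bool {
    let mut path = vec![start];
    let mut cur = start;
    while let Some(&next) = waiting_for.get(&cur) {
        if path.contains(&next) {
            return true; // walked back onto the path: cycle found
        }
        path.push(next);
        cur = next;
    }
    false // chain ended at a context that is not waiting
}

fn main() {
    let mut g = HashMap::new();
    g.insert(1, 2); // ctx 1 waits for ctx 2
    g.insert(2, 3); // ctx 2 waits for ctx 3
    g.insert(3, 1); // ctx 3 waits for ctx 1 -> cycle
    assert!(is_deadlocked(&g, 1));
    g.remove(&3); // break the cycle
    assert!(!is_deadlocked(&g, 1));
}
```

The real iterator traverses lock wait queues under the reader-preferring `m_LOCK_waiting_for` locks and weighs victims (cf. `m_deadlock_overweight`); the sketch shows only the graph walk that `m_waiting_for` makes cheap.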
m_pins: *mut LF_PINS
m_deadlock_overweight: uint
lock_warrant: *mut MDL_context
This is for the case when the thread opening the table does not acquire the lock itself, but utilizes a lock guarantee from another MDL context.
For example, in InnoDB, MDL is acquired by the purge_coordinator_task, but the table may be opened and used in a purge_worker_task. The coordinator thread holds the lock for the duration of the worker’s purge job, or longer, possibly reusing the shared MDL for different workers and jobs.
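The delegation can be sketched as a one-level indirection: when checking whether a lock is covered, a context consults its own tickets first and then its warrant context, if any. This is a hypothetical model (`Ctx`, `covers`, owned `Box` instead of the raw `*mut MDL_context` pointer), not the mariadb_sys API:

```rust
// Illustrative model of lock-warrant delegation; not the real types.
#[derive(Default)]
struct Ctx {
    owned_locks: Vec<&'static str>,
    // The real field is `lock_warrant: *mut MDL_context`; an owned Box
    // stands in here to keep the sketch safe and self-contained.
    lock_warrant: Option<Box<Ctx>>,
}

impl Ctx {
    // A context is covered for `name` if it holds the lock itself,
    // or if its warrant context does.
    fn covers(&self, name: &str) -> bool {
        self.owned_locks.iter().any(|l| *l == name)
            || self.lock_warrant.as_ref().map_or(false, |w| w.covers(name))
    }
}

fn main() {
    // Coordinator acquires the MDL; the worker merely borrows its guarantee.
    let coordinator = Ctx {
        owned_locks: vec!["purge.t1"],
        lock_warrant: None,
    };
    let worker = Ctx {
        owned_locks: vec![],
        lock_warrant: Some(Box::new(coordinator)),
    };
    assert!(worker.covers("purge.t1"));
    assert!(!worker.covers("other.t2"));
}
```

The design keeps ownership in one place: the coordinator's context remains the sole holder, so releasing or reusing the shared MDL across workers never requires transferring tickets between contexts.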