
Embedded OS Interview Questions — RTOS, Scheduling, Synchronization

Essential RTOS and embedded OS interview questions: scheduling, mutexes, semaphores, priority inversion, interrupts, memory management, and more.


RTOS Fundamentals

Q: What is the difference between a general-purpose OS and a real-time OS (RTOS)?

A general-purpose OS (like Linux or Windows) is designed to maximize throughput — it tries to get as much total work done as possible, sharing CPU time fairly among all processes. An RTOS, on the other hand, is designed for determinism — it guarantees that the highest-priority ready task always runs within a bounded, predictable time.

The key distinction is not speed but predictability. An RTOS kernel is typically fully preemptible: if a higher-priority task becomes ready while a lower-priority task is running, the scheduler immediately context-switches to the higher-priority task. In a general-purpose OS, the scheduler may delay preemption to finish a kernel operation or to give the current process its remaining time slice.

In embedded systems this matters because missing a deadline can cause physical harm — a motor controller that reacts 5 ms late could damage hardware, or an airbag that deploys 50 ms late is useless. Common RTOS examples include FreeRTOS, Zephyr, VxWorks, and QNX. Many embedded projects that do not need hard real-time guarantees still use an RTOS simply because the priority-based preemptive scheduler makes system behavior easier to reason about than bare-metal super-loops.

Q: What is the RTOS tick and how does it affect system behavior?

The RTOS tick is a periodic hardware timer interrupt (often called the "system tick" or "SysTick" on ARM Cortex-M) that drives the kernel's time-keeping. Each tick interrupt invokes the scheduler, which checks whether any delayed tasks have timed out, updates software timers, and potentially triggers a context switch if a higher-priority task has become ready.

Typical tick rates range from 100 Hz to 1000 Hz (1–10 ms period). The tick rate represents a fundamental trade-off: a faster tick gives finer time resolution for delays and timeouts but increases CPU overhead from the tick ISR. On resource-constrained MCUs running at tens of MHz, a 1 kHz tick can consume a non-trivial percentage of CPU time.

Some modern RTOS kernels support a tickless (or dynamic tick) mode, where the tick timer is reprogrammed to fire only when the next task timeout or delay expires, rather than at a fixed period. This is especially valuable for battery-powered devices because the MCU can stay in deep sleep for longer periods. FreeRTOS supports this via its "tickless idle" feature. An interview follow-up often asks: "What is the time resolution of vTaskDelay(1)?" — the answer is that it delays for one tick period, so the actual wall-clock delay depends on the configured tick rate and where in the current tick period the call was made.
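The resolution trade-off can be made concrete with a small sketch of tick conversion. The macro and function below only mirror the truncating behavior of FreeRTOS's pdMS_TO_TICKS — the names `ms_to_ticks` and `TICK_RATE_HZ` are illustrative, not the real API. Any delay shorter than one tick period truncates to zero ticks:

```c
#include <stdint.h>

/* Illustrative stand-in for pdMS_TO_TICKS: converts milliseconds to
 * ticks, rounding down. At 100 Hz (10 ms tick), a requested 5 ms delay
 * truncates to 0 ticks — finer resolution than the tick does not exist. */
#define TICK_RATE_HZ 100u                     /* 10 ms tick period */

static uint32_t ms_to_ticks(uint32_t ms) {
    return (ms * TICK_RATE_HZ) / 1000u;       /* integer division truncates */
}
```

This is why `vTaskDelay(1)` waits "up to one tick period": the delay is counted in whole ticks, not wall-clock milliseconds.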

Q: What is context switching and what happens during one?

A context switch is the process by which the RTOS saves the state of the currently running task and restores the state of the next task to run. The "context" includes the CPU registers (general-purpose registers, program counter, stack pointer, status register), and on Cortex-M processors also the floating-point registers if an FPU is present.

The typical sequence is: (1) an interrupt or system call triggers the scheduler; (2) the scheduler determines the next task to run based on priority; (3) the current task's register set is pushed onto its own stack; (4) the stack pointer is switched to the new task's stack; (5) the new task's registers are popped from its stack; (6) execution resumes at the new task's saved program counter.

On ARM Cortex-M, the hardware automatically saves a subset of registers (R0–R3, R12, LR, PC, xPSR) on exception entry, and the RTOS kernel manually saves the remaining registers (R4–R11, and optionally S16–S31 for FPU). The PendSV exception is typically used to perform the actual switch at the lowest interrupt priority, ensuring it does not preempt other ISRs. Context switch time is a critical RTOS metric — it is typically 5–20 microseconds on a Cortex-M4 running at 100 MHz, depending on FPU usage and stack depth.

Threads and Processes

Q: What is the difference between a thread and a process?

A process is an independent execution unit with its own address space, including code, data, heap, and stack segments. Processes are isolated from each other by the OS — one process cannot directly access another process's memory. This isolation requires hardware support such as an MMU (Memory Management Unit).

A thread is a lightweight execution unit that exists within a process. Multiple threads in the same process share the same address space (code, data, heap) but each has its own stack and register context. Because threads share memory, inter-thread communication is fast (just read/write shared variables, with proper synchronization), but this also makes concurrency bugs like race conditions possible.

In most embedded RTOS environments (FreeRTOS, Zephyr on Cortex-M without MPU), there is no MMU and therefore no true process isolation. What the RTOS calls "tasks" are essentially threads — they all share the same flat address space. This means a bug in one task (e.g., a buffer overflow) can corrupt another task's data or the kernel itself. Some RTOS kernels support an MPU (Memory Protection Unit) to provide limited isolation, but it is coarser than full MMU-based protection. This is a common interview question because it tests whether the candidate understands the practical implications for embedded systems where full process isolation is often unavailable.

Q: What is the difference between concurrency and parallelism?

Concurrency means multiple tasks make progress over a period of time, but they do not necessarily execute at the same physical instant. On a single-core processor, concurrency is achieved through time-slicing — the scheduler rapidly switches between tasks, giving the illusion of simultaneous execution. Parallelism means multiple tasks literally execute at the same instant on different processor cores.

In embedded systems, most MCUs are single-core, so you get concurrency but not parallelism. An RTOS scheduler on a single-core Cortex-M4 provides concurrency: the highest-priority ready task runs, and lower-priority tasks run when higher-priority tasks block or yield. True parallelism requires multi-core MCUs (e.g., dual-core ESP32, RP2040, or multi-core Cortex-A systems), which introduce additional complexity like cache coherency and cross-core synchronization.

The practical implication is that on a single-core RTOS, you do not need to worry about two tasks executing the same critical section at literally the same time — disabling interrupts or using a mutex is sufficient. On a multi-core system, disabling interrupts on one core does not prevent the other core from accessing shared data, so you need spinlocks or cross-core synchronization primitives.

Synchronization

Q: What is the difference between a mutex, a binary semaphore, and a counting semaphore?

These three primitives serve different purposes, and confusing them is one of the most common embedded interview mistakes.

A counting semaphore maintains an integer count. give (or post) increments the count; take (or wait) decrements it, blocking if the count is zero. It is used to manage a pool of identical resources (e.g., 5 DMA channels, 3 UART ports) or to count events (e.g., "3 packets have arrived").

A binary semaphore is a counting semaphore with a maximum count of 1. It is primarily used for signaling between tasks or between an ISR and a task. For example, a UART receive ISR gives the semaphore to wake up a processing task. The key point: any task (or ISR) can give it, and any task can take it — there is no concept of ownership.

A mutex (mutual exclusion) is used to protect a shared resource and has the concept of ownership — the task that locks the mutex must be the one to unlock it. This ownership property enables priority inheritance: if a high-priority task blocks on a mutex held by a low-priority task, the RTOS temporarily raises the low-priority task's priority to prevent priority inversion. Binary semaphores do not have this mechanism. In FreeRTOS, xSemaphoreCreateMutex() creates a mutex with priority inheritance, while xSemaphoreCreateBinary() creates a binary semaphore without it.

A common interview trap: "Can you use a binary semaphore instead of a mutex?" Technically yes, but you lose priority inheritance and ownership enforcement, making your system vulnerable to priority inversion and accidental releases by the wrong task.
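The ownership rule can be sketched in a few lines of plain C. `toy_mutex_t` and the integer task ids are illustrative stand-ins, not a real RTOS API; the point is that, unlike a binary semaphore, the unlock path verifies the caller is the task that locked:

```c
#include <stdint.h>

/* Toy mutex sketch demonstrating ownership: only the "task" that
 * locked it (identified here by an integer id) may unlock it. */
typedef struct {
    int locked;
    int owner;
} toy_mutex_t;

static int toy_mutex_lock(toy_mutex_t *m, int task_id) {
    if (m->locked) return -1;       /* a real RTOS would block here */
    m->locked = 1;
    m->owner  = task_id;
    return 0;
}

static int toy_mutex_unlock(toy_mutex_t *m, int task_id) {
    if (!m->locked || m->owner != task_id)
        return -1;                  /* ownership enforced: wrong task rejected */
    m->locked = 0;
    return 0;
}
```

A binary semaphore has no `owner` field at all, which is exactly why it suits ISR-to-task signaling but not resource protection.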

Q: What is priority inversion and how is it solved?

Priority inversion occurs when a high-priority task is indirectly blocked by a low-priority task, and a medium-priority task runs instead of the high-priority task. Here is the classic scenario: Task L (low priority) acquires a mutex. Task H (high priority) becomes ready and tries to acquire the same mutex — it blocks because Task L holds it. Now Task M (medium priority) becomes ready. Since Task L has low priority, Task M preempts it and runs. Task H, the highest-priority task in the system, is now waiting for Task M to finish, even though Task M has lower priority than Task H. This is unbounded priority inversion.

The standard solution is priority inheritance: when Task H blocks on the mutex held by Task L, the RTOS temporarily raises Task L's priority to match Task H's. Now Task M cannot preempt Task L, so Task L finishes its critical section quickly, releases the mutex, drops back to its original priority, and Task H runs immediately. This bounds the inversion to the duration of Task L's critical section.

The Mars Pathfinder incident (1997) is the textbook example of priority inversion in the real world. The spacecraft's VxWorks RTOS had a shared bus mutex between a high-priority bus management task and a low-priority meteorological task. A medium-priority communication task caused unbounded priority inversion, triggering a watchdog reset. The fix was enabling priority inheritance on the mutex. This is a very commonly asked story in embedded interviews.

Q: What is a deadlock and how do you prevent it?

A deadlock occurs when two or more tasks are each waiting for a resource held by another, creating a circular dependency where none can make progress. The classic example: Task A holds Mutex 1 and waits for Mutex 2, while Task B holds Mutex 2 and waits for Mutex 1. Neither can proceed.

Four conditions must all be true for deadlock to occur (the Coffman conditions): (1) mutual exclusion — resources are non-shareable; (2) hold and wait — a task holds at least one resource while waiting for another; (3) no preemption — resources cannot be forcibly taken away; (4) circular wait — a circular chain of tasks each waiting for a resource held by the next.

Prevention strategies break one or more of these conditions. The most practical approach in embedded systems is lock ordering — always acquire multiple mutexes in the same global order (e.g., always lock Mutex 1 before Mutex 2), which eliminates circular wait. Other approaches include using timeout-based locking (try to acquire with a timeout, release all locks and retry if it fails), minimizing lock scope (hold locks for the shortest possible time), and avoiding nested locks entirely when possible. In a small RTOS with a handful of mutexes, a disciplined lock ordering convention documented in the code is usually sufficient. Some RTOS kernels (like Zephyr) provide deadlock detection in debug builds to catch these issues during development.
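A minimal sketch of the lock-ordering idea, with `lock_take`/`lock_give` as placeholder stand-ins for a real mutex API: routing every pair acquisition through one helper guarantees a single global order (lower-numbered lock first) and so removes the circular-wait condition.

```c
/* Placeholder lock primitives — in a real system these would be
 * RTOS mutex take/give calls. */
static void lock_take(int id) { (void)id; }
static void lock_give(int id) { (void)id; }

/* Acquire two locks always in ascending id order, regardless of the
 * order the caller names them. Records the order used in trace[]. */
static void take_pair_in_order(int a, int b, int trace[2]) {
    int first  = (a < b) ? a : b;
    int second = (a < b) ? b : a;
    lock_take(first);
    lock_take(second);
    trace[0] = first;
    trace[1] = second;
}
```

If Task A calls this with (1, 2) and Task B with (2, 1), both end up taking lock 1 first, so neither can hold one lock of the pair while waiting for the other.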

Q: What is mutual exclusion and what is a critical section?

Mutual exclusion is the requirement that when one task is accessing a shared resource, no other task can access it simultaneously. It prevents race conditions where two tasks read-modify-write the same variable and produce corrupt results.

A critical section is the region of code that accesses the shared resource and must execute atomically with respect to other tasks. In an embedded RTOS, there are several ways to implement critical sections depending on the situation: (1) disabling interrupts — the simplest method, guarantees atomicity but blocks all interrupts and increases worst-case interrupt latency, so it should only be used for very short sections; (2) using a mutex — the preferred method for longer sections, because it only blocks tasks that need the same resource while allowing unrelated tasks and interrupts to proceed; (3) using a scheduler lock — suspends task switching but keeps interrupts enabled, useful when you need atomicity against other tasks but not against ISRs.

In embedded interviews, a common follow-up is: "When would you disable interrupts instead of using a mutex?" The answer is when the critical section is very short (a few instructions) and a mutex would add unnecessary overhead, or when the shared resource is accessed by both a task and an ISR (since ISRs cannot block on a mutex). For ISR-task shared data, the typical pattern is to disable interrupts briefly in the task, or use a lock-free data structure like a ring buffer.
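The ring-buffer pattern mentioned above can be sketched as a minimal single-producer/single-consumer buffer. On a single core it needs no lock because the ISR writes only `head` and the task writes only `tail`; sizes and names are illustrative:

```c
#include <stdint.h>

/* Single-producer/single-consumer ring buffer: safe to share between
 * one ISR (producer) and one task (consumer) on a single core without
 * locks, because each side modifies only its own index. */
#define RB_SIZE 8u                      /* keep one slot empty to mark "full" */

typedef struct {
    volatile uint32_t head;             /* written only by the producer */
    volatile uint32_t tail;             /* written only by the consumer */
    uint8_t buf[RB_SIZE];
} ringbuf_t;

static int rb_put(ringbuf_t *rb, uint8_t byte) {    /* ISR side */
    uint32_t next = (rb->head + 1u) % RB_SIZE;
    if (next == rb->tail) return -1;                /* full: drop or count */
    rb->buf[rb->head] = byte;
    rb->head = next;                                /* publish last */
    return 0;
}

static int rb_get(ringbuf_t *rb, uint8_t *out) {    /* task side */
    if (rb->tail == rb->head) return -1;            /* empty */
    *out = rb->buf[rb->tail];
    rb->tail = (rb->tail + 1u) % RB_SIZE;
    return 0;
}
```

On a multi-core or out-of-order system this sketch would additionally need memory barriers; on a single-core Cortex-M it works because interrupts give a total ordering between the two sides.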

Interrupts

Q: What are the design rules for writing an Interrupt Service Routine (ISR)?

ISR design rules are among the most frequently tested topics in embedded interviews. The core principle is: keep ISRs short and fast.

Key rules: (1) Do not block — never call functions that can block (e.g., xSemaphoreTake, malloc, printf). ISRs run outside the context of any task and cannot be suspended. Most RTOS APIs have separate ISR-safe versions (e.g., xSemaphoreGiveFromISR in FreeRTOS). (2) Minimize execution time — long ISRs increase interrupt latency for all lower-priority interrupts. Do the minimum work: acknowledge the hardware, read/write hardware registers, set a flag or give a semaphore, then exit. Defer complex processing to a task. (3) Do not call non-reentrant functions — functions like printf, malloc, or strtok use global state and are not safe from ISR context. (4) Be aware of stack usage — ISRs typically run on a separate interrupt stack (on Cortex-M, the MSP), which is often small. Deep call chains or large local variables can overflow it. (5) Use volatile for shared variables — any variable shared between an ISR and a task must be declared volatile so the compiler does not optimize away reads/writes.

The top-half / bottom-half pattern (common in Linux) splits interrupt handling: the top half (ISR) does minimal hardware interaction and schedules a bottom half (tasklet, workqueue, or deferred procedure call) that runs later in a context where blocking is allowed. In an RTOS, the equivalent is to give a semaphore from the ISR and have a dedicated task waiting on that semaphore to do the actual processing.
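In its simplest bare-metal form, the deferral pattern looks like the sketch below. The names (`uart_rx_isr`, `process_pending`) are illustrative; an RTOS version would give a semaphore from the ISR instead of setting a flag, and the processing task would block on it:

```c
#include <stdint.h>

/* Deferred processing, bare-metal style: the "ISR" only captures the
 * hardware data and raises a flag; the task-level loop does the work. */
static volatile uint8_t rx_byte;
static volatile int     rx_ready;

static void uart_rx_isr(uint8_t hw_data) {  /* pretend hardware calls this */
    rx_byte  = hw_data;                     /* minimal work in ISR context */
    rx_ready = 1;                           /* signal the task level */
}

static int process_pending(uint8_t *out) {  /* called from task context */
    if (!rx_ready) return 0;                /* nothing to do */
    *out = rx_byte;
    rx_ready = 0;
    return 1;                               /* one item handled */
}
```

Note this one-deep "mailbox" loses data if a second byte arrives before the task runs — which is exactly why real drivers use a ring buffer or queue between the ISR and the task.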

Q: What is the difference between a hardware interrupt and a software interrupt?

A hardware interrupt (also called an external interrupt) is triggered by a peripheral or external signal — for example, a GPIO pin changing state, a UART receiving a byte, a timer reaching its compare value, or a DMA transfer completing. Hardware interrupts are asynchronous to the program's execution and can occur at any time.

A software interrupt is triggered deliberately by executing a special instruction (e.g., SVC on ARM, INT on x86). In embedded RTOS systems, software interrupts are commonly used for system calls — when a task needs to request a kernel service (e.g., create a task, send to a queue), it executes an SVC instruction, which traps into the kernel in a controlled way. This mechanism provides a clear boundary between user code and kernel code.

On ARM Cortex-M, the NVIC (Nested Vectored Interrupt Controller) manages interrupt priorities and nesting. Interrupts can be maskable (can be disabled via the interrupt enable bit or PRIMASK/BASEPRI registers) or non-maskable (NMI — cannot be disabled and is reserved for critical faults like clock failures). An interview may ask about interrupt nesting: on Cortex-M, a higher-priority interrupt can preempt a lower-priority ISR, and the hardware handles the register saving automatically. The priority level is set via the NVIC priority registers, and on Cortex-M, lower numeric priority values mean higher urgency.

Reentrant Functions and volatile

Q: What makes a function reentrant and why does it matter in embedded systems?

A function is reentrant if it can be safely interrupted during execution and called again ("re-entered") before the first invocation completes, without producing incorrect results. This property is essential in embedded systems because ISRs can preempt tasks at any point and may call the same function the task was executing.

For a function to be reentrant, it must: (1) not use static or global variables (or, if it does, protect them with proper synchronization); (2) not modify its own code (not relevant in modern systems with Harvard architecture or read-only flash); (3) not call non-reentrant functions. The function should operate only on its parameters and local variables, which live on the stack and are therefore unique to each invocation.

Common non-reentrant functions include strtok (uses an internal static pointer), rand (uses static state), and printf/malloc (use global data structures). Their reentrant counterparts are typically suffixed with _r (e.g., strtok_r). In practice, if a function must use shared state, you can make it thread-safe (not exactly the same as reentrant) by protecting the shared state with a mutex — but this only works for task-to-task concurrency, not for ISR-to-task concurrency, since ISRs cannot take mutexes. For ISR safety, you must either avoid shared state entirely or use lock-free techniques.
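A minimal illustration of the difference (both function names are invented for the example): the first version hides state in a `static`, so two interleaved callers share one counter; the reentrant version keeps all state in caller-owned storage, mirroring the `_r` convention used by `strtok_r`.

```c
/* Non-reentrant: hidden static state is shared by every caller, so an
 * interrupting second invocation corrupts the first one's sequence. */
static int next_id_nonreentrant(void) {
    static int counter = 0;
    return ++counter;
}

/* Reentrant: the caller supplies the state, so concurrent invocations
 * with separate state cannot interfere — the same idea as strtok_r. */
static int next_id_r(int *counter) {
    return ++(*counter);
}
```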

Q: What does the volatile keyword do and when is it required in embedded C?

The volatile keyword tells the compiler that a variable's value may change at any time through mechanisms invisible to the surrounding code (hardware, an ISR, another task). The compiler must therefore perform every read and write exactly as written — it cannot optimize away reads or cache the value in a register.

You must use volatile in three situations in embedded C: (1) Hardware registers — peripheral registers (e.g., a UART status register) can change at any time due to hardware events. Without volatile, the compiler might read the register once, cache it in a register, and never re-read it, causing your polling loop to spin forever. (2) Variables shared between an ISR and a task — without volatile, the compiler may assume the variable never changes within the main loop (since it cannot see the ISR modifying it) and optimize away the check. (3) Variables shared between multiple tasks in some cases, though a proper synchronization primitive (mutex, queue) usually handles the memory visibility issue.

A classic interview trap: "Does volatile make an operation atomic?" No. On a 32-bit ARM, reading a volatile uint32_t is atomic because it is a single load instruction, but incrementing it (x++) is a read-modify-write sequence that is not atomic even with volatile. For atomicity, you must either disable interrupts, use a mutex, or use hardware atomic instructions (e.g., LDREX/STREX on Cortex-M). Another trap: volatile does not guarantee memory ordering between different variables — for that, you need memory barriers.
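The lost-update hazard behind "volatile is not atomic" can be simulated deterministically by invoking the "ISR" by hand at the worst possible point. Names are illustrative; on real hardware the interrupt could fire anywhere in the load/modify/store sequence:

```c
#include <stdint.h>

/* x++ on a volatile variable still compiles to load / increment / store.
 * If an ISR modifies the variable between the load and the store, the
 * ISR's update is silently overwritten. */
static volatile uint32_t shared = 0;

static void isr_increment(void) {
    shared = shared + 1;                 /* the ISR's contribution */
}

static uint32_t task_increment_with_isr_in_middle(void) {
    uint32_t tmp = shared;               /* load   (reads 0) */
    isr_increment();                     /* "interrupt" fires: shared = 1 */
    tmp = tmp + 1;                       /* modify (stale value: 0 -> 1) */
    shared = tmp;                        /* store: clobbers the ISR's update */
    return shared;                       /* 1, not 2 — one increment lost */
}
```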

Memory Management

Q: How is memory managed in an RTOS and why is heap fragmentation a concern?

In desktop systems, dynamic memory allocation (malloc/free) is used freely. In embedded RTOS systems, dynamic allocation is used cautiously because of fragmentation, non-deterministic timing, and limited RAM.

Heap fragmentation occurs when repeated allocations and frees of different sizes create small, non-contiguous free blocks in the heap. Even if the total free memory is sufficient, no single block may be large enough to satisfy a new allocation request. On a desktop with gigabytes of RAM, this is manageable. On an MCU with 64 KB of RAM, fragmentation can cause allocation failures that are nearly impossible to reproduce and debug. FreeRTOS provides multiple heap implementations to address this: heap_1 (allocate only, never free — simplest and deterministic), heap_2 (best-fit with free, but no coalescing — fragments over time), heap_4 (first-fit with coalescing — better for mixed allocation sizes), and heap_5 (like heap_4 but spans non-contiguous memory regions).

Best practices in embedded memory management include: allocate all dynamic objects (tasks, queues, semaphores) at startup and never free them; use statically allocated objects when possible (xTaskCreateStatic in FreeRTOS); use fixed-size memory pools (block allocators) instead of general-purpose heaps for objects that are allocated and freed frequently; and avoid malloc/free entirely in safety-critical systems (MISRA C forbids dynamic memory after initialization). An interview may ask: "Is it safe to call malloc from an ISR?" The answer is no — malloc is not reentrant (it uses global data structures) and may block (if the heap is protected by a mutex).
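A fixed-size block pool of the kind recommended above can be sketched in a few lines (sizes and names are illustrative, and real pools add double-free and ownership checks). Because every block is the same size, allocation and free are O(1) and fragmentation is impossible:

```c
#include <stddef.h>
#include <stdint.h>

/* Fixed-size block pool: a static array of blocks plus a stack of free
 * pointers. No splitting, no coalescing, no fragmentation. */
#define BLOCK_SIZE  32u
#define NUM_BLOCKS  4u

static uint8_t pool[NUM_BLOCKS][BLOCK_SIZE];
static void   *free_list[NUM_BLOCKS];
static size_t  free_count;

static void pool_init(void) {
    for (size_t i = 0; i < NUM_BLOCKS; i++)
        free_list[i] = pool[i];
    free_count = NUM_BLOCKS;
}

static void *pool_alloc(void) {
    return free_count ? free_list[--free_count] : NULL;  /* O(1), no search */
}

static void pool_free(void *blk) {
    free_list[free_count++] = blk;      /* sketch: no double-free check */
}
```

In task context the pool would be wrapped in a critical section or mutex; FreeRTOS users often reach for queues of fixed-size buffers to the same effect.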

Q: What causes stack overflow in an RTOS task and how do you detect it?

Each RTOS task has its own stack, allocated when the task is created. Stack overflow occurs when a task uses more stack space than allocated — typically due to deep function call chains, large local arrays, or excessive recursion. Because tasks share the same address space, a stack overflow in one task silently corrupts adjacent memory — which could be another task's stack, a global variable, or the heap — causing seemingly unrelated failures.

Detection methods include: (1) Stack watermarking — the RTOS fills each task's stack with a known pattern (e.g., 0xA5A5A5A5) at creation. You can periodically check how much of the pattern remains to determine high-water-mark usage. FreeRTOS provides uxTaskGetStackHighWaterMark() for this. (2) Runtime stack checking — the RTOS checks for overflow on each context switch by verifying that the stack pointer is within bounds or that a sentinel value at the stack boundary is intact. FreeRTOS provides configCHECK_FOR_STACK_OVERFLOW (methods 1 and 2). (3) MPU-based protection — on Cortex-M with an MPU, you can set a stack guard region that triggers a memory fault on overflow. (4) Static analysis — tools can analyze the call graph and compute worst-case stack depth at compile time.

Sizing task stacks correctly requires measuring actual usage under worst-case conditions (e.g., the deepest call path including any ISR stacking if using the process stack). A common rule of thumb is to set the stack to 1.5–2x the measured high-water mark during development, then tighten it for production. Always leave margin for interrupt stacking.
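The watermark idea behind uxTaskGetStackHighWaterMark() can be illustrated on a simulated stack (the helper below is a sketch, not the FreeRTOS implementation): count how many fill-pattern words survive at the never-used end. On a full-descending stack that end is the low-address end, which maps to index 0 here:

```c
#include <stddef.h>
#include <stdint.h>

#define STACK_FILL_PATTERN 0xA5A5A5A5u

/* Count pattern words still intact at the unused end of a task stack.
 * The result is the minimum free stack ever observed, in words. */
static size_t high_water_free_words(const uint32_t *stack, size_t words) {
    size_t free_words = 0;
    while (free_words < words && stack[free_words] == STACK_FILL_PATTERN)
        free_words++;                   /* still holds the fill pattern */
    return free_words;
}
```

A periodic monitor task can log this per task; a result trending toward zero means the stack is about to overflow.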

Watchdog Timers

Q: How does a watchdog timer work in an RTOS context?

A watchdog timer (WDT) is a hardware timer that resets the processor if it is not periodically "kicked" (refreshed) by software. Its purpose is to recover from software hangs — if the main loop or a critical task gets stuck (due to a deadlock, infinite loop, or corrupted program counter), the watchdog expires and forces a system reset.

In a bare-metal super-loop, watchdog usage is straightforward: kick it once per iteration of the main loop. In an RTOS, it is more nuanced because there are multiple tasks. Simply kicking the watchdog from one task does not guarantee that other tasks are still running. A robust RTOS watchdog pattern is: (1) create a dedicated watchdog task at a priority that allows it to monitor all other tasks; (2) each monitored task periodically "checks in" (e.g., sets a flag or increments a counter); (3) the watchdog task verifies that all monitored tasks have checked in within their expected period; (4) only if all tasks have checked in does the watchdog task kick the hardware WDT.

This way, if any single task hangs, it stops checking in, the watchdog task stops kicking the WDT, and the system resets. An alternative approach uses a window watchdog — the WDT must be kicked within a specific time window (not too early, not too late), catching both hangs and tasks that run too fast (which might indicate a logic error). In safety-critical systems, an external watchdog IC (separate from the MCU) is often used because a software bug that corrupts the MCU's WDT peripheral registers could disable an internal watchdog.
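The check-in pattern can be sketched with a bitmask (the mask layout and function names are illustrative): each monitored task owns one bit, and the watchdog task refreshes the hardware WDT only when every bit is present, then clears the mask for the next monitoring window.

```c
#include <stdint.h>

/* Three monitored tasks, one check-in bit each. */
#define ALL_TASKS_MASK 0x7u

static volatile uint32_t checkin_mask;

static void task_check_in(unsigned task_bit) {   /* called by each task */
    checkin_mask |= (1u << task_bit);
}

/* Called periodically by the watchdog task. Returns 1 only when every
 * task has checked in; the caller then kicks the hardware WDT. */
static int watchdog_should_kick(void) {
    if ((checkin_mask & ALL_TASKS_MASK) != ALL_TASKS_MASK)
        return 0;                    /* a task missed its check-in: no kick */
    checkin_mask = 0;                /* start the next window */
    return 1;
}
```

If any one task hangs, its bit stays clear, the kick never happens, and the hardware watchdog resets the system.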

Spinlocks and Atomic Operations

Q: What is a spinlock and when is it appropriate to use one?

A spinlock is a synchronization primitive where a task repeatedly checks ("spins") in a tight loop until a lock becomes available. Unlike a mutex, which causes a blocked task to be descheduled (sleeping), a spinlock keeps the task actively running on the CPU, burning cycles while waiting.

On a single-core system, spinlocks are almost always a bad idea. If Task A holds the spinlock and Task B spins waiting for it, Task B wastes its entire time slice spinning and prevents Task A from running (if they are the same priority) or prevents lower-priority tasks from making progress. A mutex is far better because the blocked task yields the CPU, allowing other tasks (including the lock holder) to run.

Spinlocks are appropriate on multi-core systems for very short critical sections. On a dual-core MCU (e.g., ESP32, RP2040), if Core 0 holds a spinlock for just a few instructions, it is cheaper for Core 1 to spin than to perform a full context switch. The key requirement is that the lock is held for an extremely short time (microseconds). Spinlocks also require hardware atomic instructions (test-and-set, compare-and-swap) to implement correctly. On Cortex-M, LDREX/STREX (exclusive load/store) instructions provide this capability. In an embedded interview, the safe answer is: "Avoid spinlocks in user code on single-core RTOS systems; use mutexes instead. Spinlocks are suitable for multi-core systems or within the kernel itself for very brief critical sections."
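A minimal spinlock can be built on C11's atomic_flag, which compilers typically lower to exclusive load/store (LDREX/STREX) on ARMv7-M cores. This sketch exposes a try-lock so the busy-wait loop is explicit at the call site rather than hidden inside the primitive:

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

/* Returns 1 if the lock was acquired, 0 if another context holds it.
 * test-and-set is the indivisible read-modify-write a spinlock needs. */
static int spin_trylock(void) {
    return !atomic_flag_test_and_set_explicit(&lock, memory_order_acquire);
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

/* Usage at the call site — the caller spins, making the cost visible:
 *     while (!spin_trylock()) { }   // busy-wait (multi-core only!)
 *     ... critical section ...
 *     spin_unlock();
 */
```

The acquire/release orderings ensure the critical section's memory accesses are not reordered past the lock boundaries — something a plain volatile flag would not guarantee.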

Q: What are atomic operations and why do they matter?

An atomic operation is an operation that completes in its entirety without being interrupted — it is indivisible. From the perspective of any other task or ISR, an atomic operation either has not started or has fully completed; there is no visible intermediate state.

On a 32-bit ARM Cortex-M processor, a single 32-bit load or store instruction is naturally atomic — reading or writing a uint32_t at an aligned address is one instruction. However, a 64-bit access on a 32-bit core requires two instructions and is NOT atomic. Similarly, read-modify-write operations like counter++ are NOT atomic because they compile to three instructions: load, increment, store. If an ISR fires between the load and store and modifies the same variable, the ISR's update is lost.

Solutions for making compound operations atomic include: (1) disabling interrupts around the operation (simple but increases interrupt latency); (2) using LDREX/STREX (exclusive load/store) instructions on Cortex-M, which implement lock-free atomic operations by detecting if another context modified the variable between the load and store, and retrying if so; (3) using C11 _Atomic types or compiler intrinsics like __atomic_fetch_add; (4) using an RTOS mutex for longer critical sections. The choice depends on the duration of the critical section and whether it is accessed from ISR context (where mutexes are not available).
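Option (3) looks like the sketch below with C11 atomics (the counter name is illustrative): `atomic_fetch_add` performs the entire load-increment-store indivisibly, fixing the counter++ hazard described above without disabling interrupts.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic uint32_t counter = 0;

/* Atomically adds 1 and returns the previous value. On Cortex-M the
 * compiler implements this with an LDREX/STREX retry loop. */
static uint32_t bump(void) {
    return atomic_fetch_add(&counter, 1u);
}
```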

Advanced Topics

Q: What is the difference between preemptive and cooperative scheduling?

In preemptive scheduling, the RTOS can forcibly suspend a running task at any tick interrupt or event and switch to a higher-priority task. The running task does not need to voluntarily give up the CPU — the kernel takes it away. This is the default mode in most RTOS kernels (FreeRTOS, Zephyr, VxWorks) and ensures that the highest-priority ready task always runs with bounded latency.

In cooperative scheduling, a task runs until it explicitly yields the CPU (by calling a blocking API, taskYIELD(), or a delay function). No tick-based preemption occurs. This is simpler (fewer race conditions, since task switches only happen at known points) but dangerous: a single task that forgets to yield or enters a long computation starves all other tasks. Cooperative scheduling is sometimes used in very simple systems or in combination with preemptive scheduling (e.g., tasks at the same priority time-slice cooperatively in FreeRTOS when configUSE_TIME_SLICING is enabled).

Most embedded interviews expect you to advocate for preemptive scheduling and explain its trade-offs: it requires more careful synchronization (critical sections, mutexes) because a task can be preempted at any point, but it provides guaranteed responsiveness. A useful comparison: cooperative scheduling guarantees mutual exclusion between points where you yield (no need for mutexes if you never yield inside a critical section), but it cannot guarantee response time because any task can hog the CPU indefinitely.

Q: What are common inter-task communication mechanisms in an RTOS?

Beyond mutexes and semaphores (which are synchronization primitives), RTOS kernels provide several mechanisms for tasks to exchange data.

Message queues are the workhorse of RTOS inter-task communication. A queue is a FIFO buffer where one task (or ISR) sends messages and another task receives them. The sender and receiver are decoupled — the sender does not need to know when the receiver processes the message. Queues are ISR-safe (using FromISR variants in FreeRTOS) and handle both synchronization and data transfer in one primitive. They are ideal for producer-consumer patterns, such as an ISR that reads a sensor and posts the value to a queue for a processing task.

Event flags (or event groups in FreeRTOS) allow a task to wait for one or more events to occur, specified as bits in a bitmask. A task can wait for "any" (OR) or "all" (AND) of the specified bits. This is useful for synchronizing on multiple conditions — for example, waiting for both "network connected" AND "configuration loaded" before starting the main application.

Direct-to-task notifications (FreeRTOS-specific) are a lightweight alternative to semaphores and event groups. Each task has a built-in 32-bit notification value that can be used as a binary semaphore, counting semaphore, event group, or simple mailbox, with significantly lower RAM overhead than a dedicated synchronization object.

Shared memory with mutexes is the simplest approach — tasks read and write shared global data protected by a mutex. It works well for small, infrequently updated data (like a configuration struct) but scales poorly and creates coupling between tasks. Queues and notifications are generally preferred for new designs because they naturally decouple producers from consumers.