Quick Recap
An ADC (Analog-to-Digital Converter) bridges the analog world and the digital domain inside a microcontroller by converting continuous voltage levels into discrete numerical values that firmware can process. Every sensor reading -- temperature, pressure, current, light intensity -- passes through an ADC before the CPU can act on it, making ADC configuration one of the most practical skills an embedded engineer can have. Interviewers test whether you understand the full signal chain from analog input to digital result: resolution trade-offs, sampling theory, noise management, and efficient data acquisition patterns like DMA.
Key Facts:
- Resolution: 8-bit to 24-bit (typically 12-bit on modern MCUs), determines the smallest voltage change the ADC can detect
- Sampling rate: Hundreds of kSPS to several MSPS, governed by Nyquist theorem and conversion architecture
- LSB voltage: Vref / 2^n -- the voltage represented by one least significant bit
- Quantization error: Inherent +/-0.5 LSB uncertainty in every conversion result
- Oversampling: Trading speed for resolution -- every 4x oversampling gains ~1 effective bit
- DMA integration: Essential for high-speed or multi-channel acquisition without CPU intervention
Deep Dive
At a Glance
| Characteristic | Detail |
|---|---|
| Resolution range | 8-bit to 24-bit (12-bit most common on MCUs) |
| Sampling rate | 100 kSPS (SAR typical) to 1+ MSPS; sigma-delta: lower rate, higher resolution |
| Common architectures | SAR, sigma-delta, flash, pipeline |
| Input range | 0 V to Vref (single-ended) or +/-Vref (differential) |
| Channels | 1 to 20+ multiplexed inputs on typical MCUs |
| Error detection | Watchdog thresholds, overrun flags, calibration registers |
How ADCs Work
Analog-to-digital conversion is a three-step process: sampling, quantization, and encoding.
Sampling captures the instantaneous analog voltage at a specific moment in time. A sample-and-hold (S&H) circuit -- essentially a switch followed by a small capacitor -- closes briefly to charge the capacitor to the input voltage, then opens to hold that voltage steady while the conversion takes place. The hold phase is critical: the ADC core needs a stable voltage throughout the entire conversion cycle. If the input moves during conversion, the result is meaningless.
Quantization maps the held voltage to the nearest discrete level. An n-bit ADC divides the reference voltage range into 2^n equal steps. The analog voltage falls somewhere between two adjacent levels, and the ADC rounds to the nearest one. This rounding introduces an unavoidable error -- quantization error -- of at most +/-0.5 LSB. This is not a flaw; it is a mathematical consequence of representing a continuous quantity with a finite number of bits.
Encoding translates the quantized level into a binary number and stores it in the ADC data register. Firmware reads this register to obtain the raw conversion result. The relationship between the raw value and the actual voltage is: V_in = (raw_value / 2^n) * Vref.
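The raw-code-to-voltage relationship above can be written as a one-line helper. This is a minimal sketch, not any vendor's API; the function name and parameters are illustrative:

```c
#include <stdint.h>
#include <math.h>

/* Convert a raw ADC reading to volts: V_in = (raw / 2^n) * Vref.
 * n_bits and vref describe the target ADC; no MCU register API implied. */
static inline float adc_raw_to_volts(uint32_t raw, float vref, unsigned n_bits)
{
    return ((float)raw / (float)(1UL << n_bits)) * vref;
}
```

For example, a mid-scale reading of 2048 from a 12-bit ADC with a 3.3 V reference corresponds to 1.65 V.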
The entire cycle -- sample, convert, latch result -- takes a finite amount of time called the conversion time, which limits how fast the ADC can sample. The sampling time (how long the S&H switch stays closed) is typically configurable and must be long enough for the capacitor to fully charge to the input voltage, especially when the source impedance is high.
Resolution and Quantization
Resolution defines the granularity of an ADC measurement. The LSB voltage -- the smallest voltage step the ADC can distinguish -- is calculated as:
LSB = Vref / 2^n
where Vref is the reference voltage and n is the number of bits. The quantization error is always +/-0.5 LSB, representing the worst-case rounding error inherent in digitization.
Example: a 12-bit ADC with Vref = 3.3 V has LSB = 3.3 / 4096 = 0.806 mV. The maximum quantization error is +/-0.403 mV, and voltage changes smaller than one LSB (~0.8 mV) may not register at all.
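The LSB and worst-case quantization error formulas above translate directly into code. A minimal sketch (helper names are illustrative):

```c
#include <math.h>

/* LSB voltage: the smallest step an n-bit ADC can resolve. */
static inline float adc_lsb_volts(float vref, unsigned n_bits)
{
    return vref / (float)(1UL << n_bits);
}

/* Worst-case quantization error is half an LSB. */
static inline float adc_quant_error_volts(float vref, unsigned n_bits)
{
    return 0.5f * adc_lsb_volts(vref, n_bits);
}
```

Plugging in the example values: a 12-bit ADC at 3.3 V gives an LSB of ~0.806 mV and a quantization error of ~0.403 mV, matching the hand calculation.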
Resolution vs accuracy: resolution tells you how finely the ADC divides the range; accuracy tells you how close the result is to the true voltage. A 16-bit ADC with poor calibration, noisy Vref, or high offset error can be less accurate in practice than a well-calibrated 12-bit ADC. The metric that captures this reality is ENOB (Effective Number of Bits) -- the number of bits of resolution that actually contribute useful information after accounting for all noise and error sources. ENOB is always less than or equal to the nominal resolution. It is calculated from the measured SINAD (signal-to-noise-and-distortion ratio, in dB): ENOB = (SINAD - 1.76) / 6.02.
| Resolution | Levels | LSB at 3.3 V | Typical Use |
|---|---|---|---|
| 8-bit | 256 | 12.9 mV | Basic monitoring, battery level |
| 10-bit | 1,024 | 3.22 mV | General-purpose sensors |
| 12-bit | 4,096 | 0.806 mV | Precision sensors, motor control |
| 16-bit | 65,536 | 50.4 uV | High-precision measurement |
| 24-bit | 16.7M | ~0.2 uV | Laboratory instruments, audio |
Be ready to calculate LSB on a whiteboard. Interviewers commonly ask: "You have a 12-bit ADC with a 3.3 V reference. What is the smallest voltage change you can detect?" Answer: 3.3 V / 4096 = 0.806 mV. Follow up with: "And the quantization error is plus or minus half of that, about 0.4 mV."
Sampling Theory and Nyquist
The Nyquist-Shannon sampling theorem states that to faithfully represent an analog signal, you must sample at a rate at least twice the highest frequency component present in the signal:
fs >= 2 * fmax
If the sampling rate violates this requirement, aliasing occurs: high-frequency components fold back into lower frequencies and appear as spurious signals in the digital data. Aliasing cannot be removed by digital filtering after the fact -- once the data is sampled, the aliased components are indistinguishable from real low-frequency signals.
To prevent aliasing, an anti-aliasing filter (a low-pass analog filter) must be placed before the ADC input. This filter attenuates all frequency components above fs/2 so they do not appear in the sampled data. The filter's cutoff frequency should be set at or below the Nyquist frequency (fs/2).
In practice, real analog filters do not have infinitely sharp rolloff. A signal at exactly fs/2 is only partially attenuated. This is why the practical engineering rule is to sample at 5x to 10x the signal bandwidth, giving the anti-aliasing filter room to attenuate out-of-band content before it reaches the ADC.
Example: to accurately digitize a 1 kHz sensor signal, the theoretical minimum sampling rate is 2 kHz. But a practical system would sample at 5-10 kHz and use a low-pass filter with a cutoff around 1-2 kHz to reject noise above the signal band.
Once aliased frequencies appear in your sampled data, no amount of digital filtering can remove them. The anti-aliasing filter must be an analog filter placed before the ADC. This is a common interview trap -- candidates sometimes suggest fixing aliasing in software.
ADC Architectures
Different ADC architectures trade off speed, resolution, cost, and power consumption. Knowing which type to use for a given application is a practical design skill:
| Architecture | Speed | Resolution | Power | Cost | Typical Use Case |
|---|---|---|---|---|---|
| SAR | Medium (100 kSPS - 5 MSPS) | 8-18 bit | Low-medium | Low | Most MCU on-chip ADCs, general-purpose sensors |
| Sigma-delta | Low (10 SPS - 100 kSPS) | 16-24 bit | Low | Medium | Precision measurement, load cells, thermocouples |
| Flash | Very high (100+ MSPS) | 4-8 bit | Very high | High | RF, video, oscilloscopes |
| Pipeline | High (10-200 MSPS) | 10-16 bit | High | High | Communications, radar, high-speed data acquisition |
SAR (Successive Approximation Register) ADCs are by far the most common in embedded systems. They work by performing a binary search: the internal DAC starts at the midpoint of the range and compares against the input, then halves the remaining range on each clock cycle. An n-bit conversion takes n clock cycles. SAR ADCs offer a good balance of speed, resolution, and power, which is why nearly every microcontroller includes one.
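The SAR binary search can be simulated in a few lines. This is a behavioral model of the algorithm described above, not real peripheral code -- the comparator and DAC are idealized:

```c
#include <stdint.h>

/* Simulate an n-bit SAR conversion: try each bit from MSB to LSB,
 * keeping it only if the trial DAC voltage does not exceed the input. */
static uint32_t sar_convert(float vin, float vref, unsigned n_bits)
{
    uint32_t code = 0;
    for (int bit = (int)n_bits - 1; bit >= 0; bit--) {
        uint32_t trial = code | (1UL << bit);
        float v_dac = ((float)trial / (float)(1UL << n_bits)) * vref;
        if (vin >= v_dac)      /* comparator decision */
            code = trial;      /* keep the bit */
    }
    return code;
}
```

Note the loop runs exactly n times -- one comparator decision per bit -- which is why an n-bit SAR conversion takes n clock cycles.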
Sigma-delta ADCs achieve very high resolution by massively oversampling the input with a 1-bit ADC and then using a digital decimation filter to produce high-resolution results. They are inherently slow but extremely precise, making them ideal for weighing scales, temperature measurement, and audio applications.
Flash ADCs use 2^n - 1 parallel comparators to convert the entire input range in a single clock cycle. They are the fastest architecture but limited to low resolution because the comparator count doubles with every additional bit. An 8-bit flash ADC requires 255 comparators.
Pipeline ADCs break the conversion into stages, with each stage resolving a few bits and passing the residual error to the next stage. This allows high throughput with moderate resolution, but introduces a multi-cycle latency between sampling and result availability.
Oversampling and Averaging
Oversampling is a technique that trades sampling speed for improved effective resolution. The principle: by sampling a signal at a rate much higher than necessary and then averaging, you reduce the noise floor and effectively gain additional bits of resolution.
The rule of thumb: every 4x increase in oversampling rate yields approximately 1 additional bit of effective resolution. To gain 2 extra bits, you oversample by 16x. To gain 3 extra bits, by 64x. The math behind this relies on the fact that uncorrelated quantization noise averages out over multiple samples, reducing the noise power by the oversampling ratio.
Example: a 12-bit ADC sampled at 4x the required rate and decimated (averaged and downsampled) yields roughly 13 effective bits. At 16x oversampling, you approach 14 effective bits. At 256x oversampling, you can theoretically gain 4 bits, reaching 16 effective bits from a 12-bit ADC.
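The accumulate-and-shift arithmetic behind this is simple: sum 4^m samples, then right-shift by m bits, leaving a result with m extra bits of range. A sketch of what a hardware oversampler (or a software equivalent) does:

```c
#include <stdint.h>

/* Accumulate 4^extra_bits raw samples and right-shift by extra_bits.
 * Each 4x oversampling gains ~1 effective bit; the result occupies
 * (n_adc + extra_bits) bits. Caller supplies enough samples. */
static uint32_t oversample_decimate(const uint16_t *samples, unsigned extra_bits)
{
    uint32_t n = 1UL << (2 * extra_bits);   /* 4^extra_bits samples */
    uint64_t acc = 0;
    for (uint32_t i = 0; i < n; i++)
        acc += samples[i];
    return (uint32_t)(acc >> extra_bits);
}
```

For 2 extra bits (16x oversampling) of a constant 12-bit input of 1000, the result is 4000 -- the same value rescaled to a 14-bit range.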
Hardware oversampling is available on many modern MCUs (e.g., STM32 series) and performs the accumulation and right-shift in the ADC peripheral itself, delivering a higher-resolution result directly in the data register without any CPU involvement. Software averaging achieves the same result but uses CPU cycles to accumulate and divide. Hardware oversampling is preferred when available because it is faster and does not load the CPU.
Important caveat: oversampling only works if there is noise in the signal (or dithering is applied). If the input is perfectly static and noise-free, every sample returns the same value, and averaging gains nothing. In practice, real-world signals always contain enough thermal noise for oversampling to be effective.
Signal Conditioning
The quality of an ADC measurement depends as much on what happens before the ADC input as on the ADC itself. Signal conditioning prepares the analog signal to match the ADC's input requirements.
Input impedance: most SAR ADCs have an internal sampling capacitor (typically 5-20 pF) that must charge through the source impedance within the sampling window. If the source impedance is too high, the capacitor does not fully charge, and the conversion result is lower than the true voltage. The datasheet specifies a maximum recommended source impedance -- typically a few kOhm for fast sampling rates. For high-impedance sources (e.g., piezoelectric sensors, voltage dividers with large resistors), an op-amp buffer must be placed between the source and the ADC input.
Voltage dividers: when the signal voltage exceeds the ADC's input range (e.g., measuring a 12 V battery with a 3.3 V ADC), a resistive voltage divider scales the signal down. The divider's output impedance must be low enough for the ADC's sampling capacitor, or a buffer amplifier is needed.
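The divider ratio is just Vout = Vin * R2 / (R1 + R2). A quick sketch -- the 27 k / 10 k values below are illustrative picks for a 12 V source, not from any specific design:

```c
#include <math.h>

/* Output of a resistive divider: Vout = Vin * R2 / (R1 + R2). */
static float divider_vout(float vin, float r1, float r2)
{
    return vin * r2 / (r1 + r2);
}
```

With R1 = 27 k and R2 = 10 k, a 12 V input scales to about 3.24 V, safely under a 3.3 V reference. Note the divider's output impedance (R1 || R2, here ~7.3 kOhm) may exceed the ADC's source-impedance limit, which is precisely when the buffer amplifier mentioned above becomes necessary.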
Anti-aliasing filter: as discussed in the sampling theory section, a low-pass filter before the ADC input prevents high-frequency noise from aliasing into the measurement band. A simple RC filter is often sufficient for low-frequency sensor signals.
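Sizing that RC filter comes down to the first-order cutoff formula fc = 1 / (2*pi*R*C). A sketch (function name illustrative):

```c
/* First-order RC low-pass cutoff frequency: fc = 1 / (2*pi*R*C). */
#define PI_F 3.14159265f

static float rc_cutoff_hz(float r_ohms, float c_farads)
{
    return 1.0f / (2.0f * PI_F * r_ohms * c_farads);
}
```

For example, 1.6 kOhm with 100 nF gives a cutoff near 1 kHz -- a plausible choice for the 1 kHz sensor example above, though remember the R also adds to the source impedance seen by the ADC.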
Reference voltage stability: the reference voltage (Vref) is the ruler against which every measurement is made. If Vref drifts or is noisy, every conversion result drifts proportionally. For precision applications, use a dedicated low-noise voltage reference IC rather than the MCU's supply rail. Even decoupling the Vref pin with a 100 nF ceramic capacitor plus a 1 uF tantalum makes a significant difference.
Conversion Modes
Most MCU ADC peripherals support several conversion modes, each suited to different use cases:
| Mode | Description | When to Use |
|---|---|---|
| Single-shot | One conversion per trigger, then stops | Periodic sensor polling, low-power wake-and-measure |
| Continuous | Converts repeatedly without re-triggering | Real-time monitoring, control loops requiring constant updates |
| Scan mode | Sequentially converts multiple channels, then stops or repeats | Reading multiple sensors (temperature + voltage + current) per cycle |
| Injected / Triggered | Conversion triggered by external event (timer, GPIO, comparator) | Synchronized sampling (e.g., motor current at specific PWM phase) |
Single-shot mode is the simplest and most power-efficient. The CPU triggers a conversion, waits for the result (polling the end-of-conversion flag or using an interrupt), and then does something with the value. The ADC is idle between conversions, saving power.
Continuous mode starts a new conversion immediately after the previous one completes. This is useful when firmware always needs the latest value and the ADC result is read asynchronously -- for example, a control loop that reads the latest ADC value on every timer tick.
Scan mode automatically sequences through a list of channels, converting each one in turn. Combined with DMA, this is the standard pattern for reading multiple sensors with a single configuration step and no per-channel CPU involvement.
Injected (triggered) mode is critical for motor control and power electronics, where current sensing must occur at a precise moment within the PWM cycle. A timer output event triggers the ADC conversion, ensuring the measurement happens exactly when the switch is in the desired state.
DMA Integration
For any application that requires continuous, high-speed, or multi-channel ADC operation, DMA (Direct Memory Access) is essential. Without DMA, the CPU must manually read each conversion result from the ADC data register before the next conversion overwrites it. At high sampling rates or with many channels, this quickly consumes all available CPU time or -- worse -- results in data loss when the CPU cannot keep up.
The standard pattern is circular DMA: the DMA controller is configured to transfer each ADC result into a memory buffer and automatically wrap around to the beginning when the buffer is full. The ADC runs in continuous or scan mode, DMA transfers happen in the background, and the CPU is free to process data at its own pace.
Double-buffering extends this pattern for applications where data must be processed without risking corruption. Two buffers are allocated. While DMA fills one buffer, the CPU processes the other. When DMA completes a buffer, the roles swap. The DMA half-transfer and transfer-complete interrupts signal the CPU which half is safe to read. This guarantees that the CPU never reads a buffer that DMA is actively writing to.
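The buffer-swap logic can be sketched in a few lines. This is a hedged illustration of the pattern, not a real HAL: the ISR callback names, buffer size, and flag scheme are all assumptions, and a real implementation would hook the actual DMA half-transfer and transfer-complete interrupts:

```c
#include <stdint.h>
#include <stddef.h>

#define BUF_LEN 64                       /* illustrative size */
static uint16_t adc_buf[BUF_LEN];        /* DMA writes here, circularly */
static volatile int half_ready;          /* set by DMA half-transfer IRQ */
static volatile int full_ready;          /* set by DMA transfer-complete IRQ */

/* Hypothetical ISR callbacks -- real names depend on your HAL. */
void on_dma_half_transfer(void)    { half_ready = 1; }
void on_dma_transfer_complete(void){ full_ready = 1; }

/* Return the half that DMA has finished writing, or NULL if none is
 * ready. While the CPU processes one half, DMA fills the other, so
 * reads never race with writes. */
const uint16_t *adc_ready_half(void)
{
    if (half_ready) { half_ready = 0; return &adc_buf[0]; }
    if (full_ready) { full_ready = 0; return &adc_buf[BUF_LEN / 2]; }
    return NULL;
}
```

The main loop simply polls (or is woken by) `adc_ready_half()` and processes `BUF_LEN / 2` samples at a time, always one half behind the DMA write pointer.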
The key insight for interviews: DMA does not make the ADC faster -- it makes data transfer free from the CPU's perspective. The ADC converts at the same rate regardless of whether DMA or polling is used. The benefit is CPU availability: with DMA, the processor can run control algorithms, handle communication, or sleep to save power while the ADC and DMA hardware handle data acquisition autonomously.
Common ADC Errors and Calibration
No ADC is perfect. Understanding the types of errors and how to mitigate them is essential for achieving accurate measurements:
| Error Type | Description | Typical Magnitude | Mitigation |
|---|---|---|---|
| Offset error | Constant shift -- output is nonzero when input is 0 V | 1-5 LSB | Measure at 0 V input, subtract offset from all readings |
| Gain error | Scale factor error -- output slope differs from ideal | 0.1-1% | Two-point calibration (measure at 0 V and Vref, compute correction factor) |
| INL (Integral Nonlinearity) | Maximum deviation of actual transfer function from ideal straight line | 1-4 LSB | Multi-point calibration, lookup table correction |
| DNL (Differential Nonlinearity) | Variation in actual step size from ideal 1-LSB step | 0.5-2 LSB | Indicates missing codes if DNL reaches -1 LSB; use higher-resolution ADC |
| Noise | Random fluctuation in repeated measurements of same input | 0.5-3 LSB RMS | Averaging, oversampling, proper PCB layout, decoupling |
Calibration is the process of measuring and correcting these errors. Most MCU ADCs provide a built-in calibration routine that measures internal offset and gain errors and applies correction factors automatically. Running this calibration at startup (and periodically during operation if temperature varies significantly) is a best practice that many engineers overlook.
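Two-point offset/gain correction, as listed in the table above, can be sketched like this. The helper names and struct are illustrative, not a vendor API; the calibration points are the raw codes observed at two known input voltages:

```c
#include <math.h>

/* Two-point (offset + gain) correction: corrected = (raw - offset) * gain. */
typedef struct { float offset; float gain; } adc_cal_t;

/* Derive correction factors from raw codes observed at two known
 * voltages. lsb is Vref / 2^n for the ADC in use. */
static adc_cal_t adc_calibrate(float raw_lo, float v_lo,
                               float raw_hi, float v_hi, float lsb)
{
    adc_cal_t c;
    float ideal_lo = v_lo / lsb;
    float ideal_hi = v_hi / lsb;
    c.gain   = (ideal_hi - ideal_lo) / (raw_hi - raw_lo);
    c.offset = raw_lo - ideal_lo / c.gain;
    return c;
}

static float adc_apply_cal(float raw, adc_cal_t c)
{
    return (raw - c.offset) * c.gain;
}
```

After calibration, both known points map exactly to their ideal codes, and intermediate readings are corrected linearly -- which removes offset and gain error but not INL, hence the lookup-table approach for nonlinearity.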
Temperature effects: ADC offset and gain drift with temperature. The reference voltage also drifts unless a temperature-compensated reference is used. For precision applications, periodic recalibration or temperature-compensated correction is necessary. Datasheets specify offset and gain temperature coefficients in LSB/degC or ppm/degC.
```
Analog In ──→ [ Sample & Hold ] ──→ [ SAR / ΣΔ Core ] ──→ [ Data Register ]
               (capture Vin)         (n-bit conversion)    (CPU or DMA reads)

              |← sampling time →|← conversion time →|
              |←────── total conversion time ──────→|
```
Debugging Story
A team developing a battery management system observed that ADC readings for cell voltages fluctuated by 10-15 mV on every conversion, far exceeding the expected +/-0.4 mV quantization noise of their 12-bit ADC. The readings were stable during bench testing with a lab power supply but became noisy once the system was integrated with the actual battery pack and switching regulator. After days of investigating firmware bugs and trying software averaging (which only masked the symptom), an engineer noticed that the Vref pin had no decoupling capacitor -- just a direct trace to the 3.3 V rail, which carried switching noise from the regulator. Adding a 100 nF ceramic capacitor and a 1 uF tantalum on the Vref pin, plus a ferrite bead to isolate the ADC analog supply, reduced the noise to under 1 LSB. The lesson: the ADC reference voltage is the measurement ruler -- if the ruler vibrates, every measurement vibrates with it. Always decouple Vref and, for precision applications, use a dedicated low-noise voltage reference.
What interviewers want to hear is that you understand the complete ADC signal chain -- from analog input conditioning through sampling, quantization, and digital readout. They want you to calculate LSB and quantization error on the spot, explain why Nyquist matters and what aliasing looks like, articulate the trade-offs between ADC architectures, and demonstrate practical awareness of noise sources, calibration, and DMA-based acquisition patterns. Showing that you have debugged real ADC accuracy issues (reference noise, impedance mismatch, missing anti-aliasing filters) sets you apart from candidates who only know the register-level API.
Interview Focus
Classic ADC Interview Questions
Q1: "How do you choose ADC resolution for an application?"
Model Answer Starter: "I start by determining the smallest voltage change the system needs to detect and work backward to the required resolution. The LSB voltage is Vref divided by 2^n, so I need enough bits that the LSB is smaller than my required measurement granularity. For example, if I need to detect 1 mV changes with a 3.3 V reference, I need at least 12 bits because 3.3 V / 4096 = 0.8 mV, which is below my 1 mV target. But I also consider ENOB -- the effective number of bits after accounting for noise -- because the datasheet resolution is an upper bound, not a guarantee. If the system noise floor is 5 mV, a 16-bit ADC gives me no real advantage over 12 bits unless I also address the noise sources."
Q2: "What is the Nyquist theorem and how does it affect ADC design?"
Model Answer Starter: "The Nyquist theorem states that the sampling frequency must be at least twice the highest frequency component in the signal. If I violate this, aliasing occurs -- high frequencies fold into lower frequencies and corrupt the data. This is irreversible once sampled, so I always place an analog anti-aliasing low-pass filter before the ADC. In practice, I sample at 5 to 10 times the signal bandwidth to give the filter adequate rolloff margin. For example, a 1 kHz signal gets sampled at 5-10 kHz with a filter cutoff around 2 kHz."
Q3: "How does oversampling improve ADC performance?"
Model Answer Starter: "Oversampling trades sampling speed for effective resolution. By sampling at a rate much higher than needed and then averaging and decimating, I reduce the quantization noise floor. The rule is that every 4x oversampling gives approximately one additional effective bit of resolution. So a 12-bit ADC sampled at 16x and decimated yields about 14 effective bits. The key prerequisite is that there must be noise present in the signal -- at least 1 LSB of noise -- for oversampling to work. Many modern MCUs have hardware oversampling built into the ADC peripheral, which is preferable because it does not consume CPU cycles."
Q4: "What causes ADC measurement errors and how do you debug them?"
Model Answer Starter: "ADC errors fall into systematic and random categories. Systematic errors include offset error, gain error, and nonlinearity -- these are fixed patterns that can be calibrated out. Random errors include thermal noise, reference voltage noise, and external interference. When debugging noisy readings, I start by checking the reference voltage with an oscilloscope for noise or ripple. Then I verify the source impedance is within the ADC's specification for the chosen sampling time. I check for missing decoupling capacitors on Vref and analog supply pins. I apply a known, stable DC voltage and measure the noise floor to isolate whether the issue is the signal source or the ADC chain itself."
Q5: "When would you use DMA with an ADC?"
Model Answer Starter: "I use DMA whenever the ADC is running continuously or sampling multiple channels, because without DMA the CPU must read the data register before the next conversion overwrites it. At high sampling rates this can consume most of the CPU's time or cause data loss. The standard pattern is circular DMA with a buffer -- the ADC converts, DMA transfers each result to memory automatically, and the CPU processes data when it is ready. For real-time processing I use double-buffering: DMA fills one half of the buffer while the CPU processes the other half, with half-transfer and transfer-complete interrupts managing the swap."
Trap Alerts
- Don't say: "Higher resolution is always better" -- resolution without considering noise, ENOB, and system-level accuracy is meaningless. A noisy 16-bit ADC can be worse than a clean 12-bit one.
- Don't forget: The anti-aliasing filter must be analog and placed before the ADC. Candidates who suggest fixing aliasing in software reveal a fundamental misunderstanding of sampling theory.
- Don't ignore: Reference voltage quality. Vref noise directly corrupts every conversion result. Always mention decoupling and, for precision work, a dedicated voltage reference.
Follow-up Questions
- "How would you measure a 0-24 V signal with a 3.3 V ADC?"
- "What is the difference between ENOB and nominal resolution, and how do you measure ENOB?"
- "How do you synchronize ADC sampling with a PWM cycle for motor current sensing?"
- "What PCB layout practices improve ADC accuracy?"
Practice
❓ A 12-bit ADC with a 3.3 V reference has an LSB voltage of approximately:
❓ What is the minimum sampling rate needed to accurately digitize a 5 kHz signal?
❓ How many times must you oversample to gain 2 additional effective bits of resolution?
❓ Which ADC architecture is most commonly found in microcontroller on-chip ADCs?
❓ What happens if you sample an analog signal below the Nyquist rate?
Real-World Tie-In
Battery Management System -- Monitored 16 lithium-ion cell voltages using a 16-bit sigma-delta ADC with multiplexed inputs. Voltage dividers scaled each cell voltage into the ADC range, and a dedicated low-drift voltage reference ensured measurement accuracy across -20 to +60 degC. Achieved +/-1 mV accuracy per cell by running factory calibration at two known voltages and applying offset/gain correction in firmware.
Industrial Vibration Sensor -- Sampled a piezoelectric accelerometer at 50 kHz using a 12-bit SAR ADC with DMA circular buffer and double-buffering. A second-order Butterworth anti-aliasing filter at 20 kHz prevented high-frequency electrical noise from aliasing into the vibration band. The CPU ran FFT analysis on completed buffers while DMA filled the next buffer, achieving continuous real-time frequency analysis with under 5% CPU utilization.
IoT Environmental Monitor -- Read temperature, humidity, and ambient light sensors through a single 12-bit ADC in scan mode with 16x hardware oversampling, yielding ~14 effective bits without any CPU involvement. Single-shot conversions triggered every 10 seconds kept average current draw under 5 uA, extending coin-cell battery life to over 3 years.