Resolution and Sampling
Q: How do you calculate the voltage represented by one LSB, and why does it matter?
The least significant bit (LSB) voltage is the smallest voltage change the ADC can distinguish — the quantization step size:
LSB = V_REF / 2^n
where V_REF is the reference voltage and n is the resolution in bits. For a 12-bit ADC with a 3.3V reference: LSB = 3.3 / 4096 = 0.806 mV. To convert a raw ADC reading back to voltage: V = raw_value * V_REF / 2^n. For example, a raw reading of 2048 corresponds to 2048 * 3.3 / 4096 = 1.65V — exactly half the reference, as expected for the midpoint code.
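Both conversions can be sketched in integer C using the 12-bit/3.3 V values above (`adc_to_millivolts` is an illustrative helper, not a vendor API; multiplying before dividing preserves precision):

```c
#include <assert.h>
#include <stdint.h>

#define ADC_BITS 12
#define VREF_MV  3300u   /* 3.3 V reference, in millivolts */

/* Convert a raw 12-bit ADC code to millivolts: V = raw * VREF / 2^n. */
static inline uint32_t adc_to_millivolts(uint32_t raw)
{
    return (raw * VREF_MV) >> ADC_BITS;   /* raw * 3300 / 4096 */
}
```

Note that the midpoint code 2048 comes back as exactly 1650 mV, matching the worked example above.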
The LSB voltage matters because it defines the quantization limit — any signal change smaller than 1 LSB is invisible to the converter. It directly determines whether the ADC has sufficient precision for your measurement. Consider a temperature sensor that outputs 10 mV per degree C. If you need 0.1-degree resolution, you need an LSB of 1 mV or less. With a 3.3V reference, a 12-bit ADC (0.806 mV LSB) barely meets this requirement, while a 10-bit ADC (3.22 mV LSB) does not. The options are: use a higher-resolution ADC, add an analog gain stage to amplify the signal before conversion (an op-amp with gain of 10x would make the signal 100 mV/degC, well above the LSB), or use oversampling to gain effective bits in software.
A common interview trap: candidates forget that the LSB calculation assumes a perfect ADC. Real ADCs have noise, offset error, gain error, and nonlinearity (INL/DNL) that degrade the effective resolution below the theoretical LSB. Always check the ENOB (Effective Number of Bits) specification to understand what resolution you actually get.
Q: Explain the Nyquist theorem and what happens when you violate it (aliasing).
The Nyquist-Shannon sampling theorem states that to perfectly reconstruct a continuous signal from discrete samples, the sampling rate must be at least twice the highest frequency component present in the signal: f_sample >= 2 * f_max. The frequency f_sample / 2 is called the Nyquist frequency — it is the upper boundary of signals that can be correctly represented at that sample rate.
When the sampling rate is too low, aliasing occurs: frequency components above the Nyquist frequency "fold back" into the representable spectrum and appear as false low-frequency signals that are mathematically indistinguishable from real ones. For a concrete example: sampling a 60 Hz signal at 100 Hz produces an alias at 40 Hz (|100 - 60| = 40). A 150 Hz signal sampled at 200 Hz aliases to 50 Hz. Once aliased, the false signal cannot be removed by any amount of digital filtering because it occupies the same frequency bin as a legitimate signal would. The data is permanently corrupted.
Prevention requires an anti-aliasing filter — an analog low-pass filter placed physically before the ADC input that attenuates all frequency components above the Nyquist frequency. This must be analog because digital filtering happens after sampling, which is too late. In practice, you should not sample at exactly 2x the signal bandwidth because that assumes a perfect brick-wall filter (infinite roll-off), which does not exist in analog electronics. Instead, sample at 5-10x the signal bandwidth to provide transition band for a practical filter. For example, if your signal of interest goes up to 1 kHz, sample at 5-10 kHz and use a simple 2nd-order RC filter with a -3 dB cutoff around 1.5-2 kHz. The excess sampling rate relaxes the filter requirements from an impractical brick wall to a gentle, inexpensive roll-off.
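For sanity-checking filter component values, the first-order RC cutoff is f_c = 1 / (2*pi*R*C). A quick sketch (the 10 kohm / 10 nF values are illustrative and land near the 1.5-2 kHz cutoff suggested above):

```c
#include <assert.h>
#include <math.h>

/* -3 dB cutoff frequency of a first-order RC low-pass filter. */
static double rc_cutoff_hz(double r_ohms, double c_farads)
{
    const double pi = 3.14159265358979323846;
    return 1.0 / (2.0 * pi * r_ohms * c_farads);
}
```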
Q: How does oversampling improve ADC resolution, and what is the 4x rule?
Oversampling exploits the statistical property that uncorrelated noise averages toward zero. By taking multiple samples of a signal that has random noise superimposed on it and averaging them, the noise component is reduced (it partially cancels across samples) while the deterministic signal component remains. This effectively increases the signal-to-noise ratio, which translates to additional bits of resolution. The key relationship is:
To gain 1 additional bit of effective resolution, oversample by 4x and average (then right-shift by 1 bit).
The math: each additional bit of resolution requires a 6 dB improvement in SNR. Averaging N samples improves SNR by 10 * log10(N) dB. For 6 dB, N = 10^(6/10) = 3.98, approximately 4. So a 12-bit ADC sampled at 4x and averaged yields an effective 13-bit result. For 14-bit effective resolution, you need 4^2 = 16x oversampling. For 16-bit effective resolution from a 12-bit ADC, you need 4^4 = 256x oversampling. STM32G4 and STM32H7 ADCs have a hardware oversampler that performs this averaging automatically with configurable ratios up to 256x and a programmable right-shift, avoiding any CPU involvement.
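In software, the 4x rule is just sum-and-shift: accumulate 4^k samples, then right-shift by k to keep k extra bits. A minimal sketch (summing 4^k 12-bit samples needs at most 12 + 2k bits, so a 32-bit accumulator is safe at any practical ratio):

```c
#include <assert.h>
#include <stdint.h>

/* Average 4^extra_bits samples and decimate: right-shift by extra_bits
 * to produce an (n + extra_bits)-bit result from an n-bit ADC. */
static uint32_t oversample(const uint16_t *samples, unsigned extra_bits)
{
    uint32_t n = 1u << (2 * extra_bits);   /* 4^extra_bits samples */
    uint32_t sum = 0;
    for (uint32_t i = 0; i < n; i++)
        sum += samples[i];
    return sum >> extra_bits;
}
```

For example, four 12-bit samples dithering between 2048 and 2049 yield the 13-bit code 4097, i.e. the half-LSB value that a single 12-bit conversion cannot represent.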
The critical catch: oversampling only works if the input signal contains noise spanning at least 1 LSB peak-to-peak. If the signal is perfectly static (or the noise is smaller than 1 LSB), all samples produce the same digital code, and averaging yields the same value — no additional resolution is gained. In very clean analog designs, you may need to intentionally inject a small amount of dithering noise (white noise at the LSB level) to enable oversampling to work. Additionally, oversampling reduces the effective sample rate by the oversampling ratio, so a 1 Msps ADC with 16x oversampling effectively samples at 62.5 ksps — verify this still satisfies Nyquist for your signal bandwidth.
Architectures
Q: What is the difference between SAR and sigma-delta ADC architectures?
A SAR (Successive Approximation Register) ADC works by performing a binary search. It samples the input voltage onto a hold capacitor, then iteratively compares it against an internal DAC: set the MSB, compare, keep or clear, move to the next bit. An n-bit conversion takes n comparison cycles plus the initial sample-and-hold time. SAR ADCs offer moderate resolution (8-16 bits), fast conversion (typically 0.1-5 Msps), low power, and low latency — a single conversion produces a valid result with no pipeline delay. They are the dominant architecture in microcontrollers (STM32, NXP, TI MSP430, Microchip PIC) because of their excellent balance of speed, resolution, power, and silicon area.
A sigma-delta ADC takes a fundamentally different approach: it samples the input at a very high rate (megahertz range) using a simple 1-bit comparator, then uses a digital decimation filter to convert the dense 1-bit stream into high-resolution output (16-24 bits) at a much lower effective sample rate. The oversampling and noise-shaping feedback loop push quantization noise out of the signal band, achieving exceptional SNR. Sigma-delta ADCs excel at precision measurement — load cells, strain gauges, thermocouples, audio — where resolution and noise performance matter more than speed.
The practical trade-offs drive architecture selection: choose SAR when you need speed, low latency, or multiplexed channels (SAR settles instantly when switching channels). Choose sigma-delta when you need high resolution (more than 16 bits) and can tolerate higher latency. Sigma-delta ADCs have a critical limitation for multiplexed systems: the digital decimation filter must settle after each channel change, which takes multiple conversion periods. Switching channels rapidly on a sigma-delta ADC produces inaccurate readings because the filter output still reflects the previous channel. This settling time penalty makes sigma-delta poorly suited for rapid multi-channel scanning — a common interview pitfall where candidates suggest sigma-delta for a multi-sensor system without considering the channel-switching overhead.
Q: What is ENOB, and why is it more meaningful than the advertised resolution?
ENOB (Effective Number of Bits) measures the ADC's actual resolution including all real-world error sources: thermal noise, quantization noise, harmonic distortion, INL (Integral Non-Linearity), DNL (Differential Non-Linearity), offset error, and gain error. It is calculated from the SINAD (Signal-to-Noise-and-Distortion ratio) of a full-scale sine wave test:
ENOB = (SINAD_dB - 1.76) / 6.02
A 12-bit ADC might advertise 12 bits of resolution but have an ENOB of only 10.5 bits, meaning the bottom 1.5 bits of every reading are dominated by noise and distortion. The advertised resolution (12 bits) is simply the number of digital output codes the converter produces — it says nothing about whether those codes are accurate. Two 12-bit ADCs from different manufacturers can have dramatically different ENOB values depending on their analog front-end design, reference quality, and layout.
ENOB matters for system design because it tells you the usable precision. If your application requires 12 bits of real accuracy (for example, a 0.05% measurement accuracy requirement translates to roughly 11 bits), a 12-bit ADC with 10.5 ENOB will not meet the spec — you need either a better-quality 12-bit ADC, a 14-bit ADC (whose ENOB might be 12+), or oversampling to recover the missing bits. Always check ENOB in the datasheet's electrical characteristics tables, not the marketing headline. Also note that ENOB degrades with input frequency (analog bandwidth limitations) and temperature — the datasheet ENOB is typically specified at a specific input frequency and temperature.
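The ENOB formula inverts the ideal-converter relationship SINAD = 6.02*n + 1.76 dB, so a perfect 12-bit ADC measures 74 dB. A one-line sketch:

```c
#include <assert.h>
#include <math.h>

/* Effective number of bits from a measured SINAD in dB. */
static double enob_from_sinad(double sinad_db)
{
    return (sinad_db - 1.76) / 6.02;
}
```

A measured SINAD of 65 dB on a nominal 12-bit converter works out to about 10.5 effective bits, matching the example above.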
Practical Design
Q: Why is the ADC reference voltage (V_REF) critical, and what can go wrong?
The reference voltage is the ADC's measurement ruler — every conversion result is a ratio of the input voltage to V_REF. If V_REF is noisy by 10 mV, every measurement inherits that 10 mV uncertainty regardless of the ADC's resolution. A 1% error in V_REF produces a 1% error in every reading. No amount of averaging, oversampling, or calibration in the digital domain can fix a noisy reference — the error is baked into every sample.
Common problems in embedded designs: (1) Using VDD directly as V_REF — on many low-cost boards, VREF+ is tied to the 3.3V supply rail, which carries 50-100 mV of switching noise from the voltage regulator, load transients from the MCU's digital core, and ripple from the power source. This noise directly limits the usable ADC resolution to perhaps 8-9 effective bits even on a 12-bit converter. (2) Poor decoupling — even a dedicated VREF pin needs proper bypassing: a 100 nF ceramic capacitor for high-frequency noise and a 1 uF capacitor for bulk energy storage, both placed as close to the pin as physically possible. (3) Temperature drift — cheap voltage references drift significantly with temperature; precision applications need references with low tempco, such as 10 ppm/degC or better. A 50 ppm/degC reference on a 3.3V rail drifts by 0.165 mV per degree — nearly invisible at 10-bit resolution but significant at 12+ bits.
A useful trick: ratiometric measurement. If the sensor is powered from the same reference as the ADC (for example, a potentiometer connected between VREF and GND), V_REF errors cancel out in the ratio. The ADC reads V_sensor / V_REF, and since V_sensor is proportional to V_REF, the reference voltage drops out of the equation. This eliminates reference accuracy as an error source entirely and is widely used for resistive sensors, potentiometers, and bridge circuits.
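In code, the ratiometric result is simply the raw code over full scale; V_REF never appears (a sketch, with `pot_position` as an illustrative helper):

```c
#include <assert.h>
#include <stdint.h>

/* Ratiometric potentiometer read: wiper position as a fraction of
 * full scale. Because the pot is powered from VREF, the reference
 * voltage cancels and its absolute value never enters the result. */
static float pot_position(uint16_t raw, uint16_t full_scale)
{
    return (float)raw / (float)full_scale;   /* 0.0 .. 1.0 */
}
```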
Q: What is signal conditioning, and why is it needed before the ADC input?
Signal conditioning is the analog processing chain between a raw sensor output and the ADC input pin. It transforms the sensor signal into a form the ADC can accurately digitize. Without proper conditioning, even a perfect ADC produces garbage results because the raw signal violates one or more of the ADC's input requirements.
Voltage scaling and level shifting: Many sensors produce signals outside the ADC's input range (0 to V_REF). A 0-10V industrial sensor needs a resistive voltage divider to scale it to 0-3.3V. A thermocouple producing +/- 50 mV needs both amplification and level shifting: a gain of roughly 33x maps the 100 mV span onto the full 3.3V range, and a 1.65V DC offset shifts the negative half of the swing into the positive ADC input range. A 4-20 mA current loop sensor needs a precision sense resistor (e.g., 165 ohm for a 0.66-3.3V output) to convert current to voltage. Getting the scaling wrong can either clip the signal (losing data at the extremes) or under-utilize the ADC range (wasting resolution on unused voltage span).
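The 4-20 mA recovery path can be sketched in integer C (a sketch assuming the 165 ohm sense resistor and 3.3 V reference from the example; `raw_to_loop_microamps` is an illustrative helper):

```c
#include <assert.h>
#include <stdint.h>

#define VREF_UV    3300000u   /* 3.3 V reference, in microvolts */
#define SENSE_OHMS 165u       /* precision sense resistor */

/* Recover loop current in microamps from a raw 12-bit code:
 * first raw -> microvolts, then I = V / R. A 64-bit intermediate
 * avoids overflow in raw * VREF_UV. */
static uint32_t raw_to_loop_microamps(uint32_t raw)
{
    uint32_t uv = (uint32_t)(((uint64_t)raw * VREF_UV) >> 12);
    return uv / SENSE_OHMS;
}
```

A full-scale code of 4095 comes back as roughly 20 mA and a code near 819 (0.66 V) as roughly 4 mA, the endpoints of the loop range.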
Anti-aliasing filtering: A low-pass filter (passive RC or active with an op-amp) removes high-frequency noise and prevents aliasing, as required by the Nyquist theorem. The filter cutoff should be set below f_sample / 2, with enough attenuation at the Nyquist frequency to suppress aliases below 1 LSB.
Impedance matching: The ADC's sample-and-hold capacitor (typically 5-20 pF on STM32) must charge fully during the sampling window. If the source impedance is too high, the capacitor cannot settle to the correct voltage in time, causing measurement errors that worsen at higher sample rates. The STM32 datasheet specifies a maximum recommended source impedance (typically 10-50 kohm, depending on sampling time configuration). High-impedance sensors like thermistors or piezo elements need a buffer op-amp (unity-gain follower) to present a low-impedance source to the ADC. This is one of the most commonly overlooked design errors in student projects — the ADC "sort of works" but readings drift or depend on sample rate because the source impedance is marginal.
DMA and Modes
Q: Why is using DMA with the ADC important, and how do you set it up?
Without DMA, the CPU must read each ADC conversion result by polling the EOC (End of Conversion) flag or responding to an interrupt. At high sample rates — say 1 Msps on an STM32F4 — this means a million interrupts per second. Each interrupt requires context save/restore, flag checking, data copy, and return, consuming roughly 30-50 clock cycles. At 168 MHz, a million 40-cycle ISRs consume 24% of the CPU — and that is before any processing of the data. On lower-cost MCUs with slower clocks, the overhead is proportionally worse and can consume 100% of the CPU, leaving no time for the application.
DMA solves this by transferring ADC results directly from the ADC data register to a RAM buffer without any CPU involvement. The typical setup is: configure the ADC in continuous conversion mode (or timer-triggered mode for a precise, jitter-free sample rate), enable DMA requests on EOC, and configure the DMA channel with source = ADC_DR (peripheral address, no increment), destination = RAM buffer (memory address, auto-increment), transfer width = half-word (16-bit for a 12-bit ADC), and circular mode so the DMA wraps to the buffer start when it reaches the end. For multi-channel scan mode, the DMA stores each channel's result sequentially in the buffer — element 0 is channel 1, element 1 is channel 2, and so on.
Enable the Half-Transfer (HT) and Transfer-Complete (TC) interrupts to implement double-buffering: when HT fires, process the first half of the buffer while DMA fills the second half. When TC fires, process the second half while DMA wraps around and fills the first half. This gives the CPU a full half-buffer's worth of time to process data, and the interrupt rate drops from once-per-sample to twice-per-buffer. With a 256-sample buffer, that is 2 interrupts instead of 256 — a 128x reduction in interrupt overhead.
// DMA + ADC circular buffer setup (STM32 HAL pseudocode)
uint16_t adc_buf[256];  // Half-word buffer, 256 samples

HAL_ADC_Start_DMA(&hadc1, (uint32_t *)adc_buf, 256);

void HAL_ADC_ConvHalfCpltCallback(ADC_HandleTypeDef *hadc) {
    process_samples(adc_buf, 128);        // First half ready
}

void HAL_ADC_ConvCpltCallback(ADC_HandleTypeDef *hadc) {
    process_samples(adc_buf + 128, 128);  // Second half ready
}
Q: What are the different ADC conversion modes, and when do you use each?
Single conversion mode: The ADC performs one conversion on one channel, then stops and waits. Software must explicitly trigger each new conversion. This is the simplest mode and is appropriate for infrequent, on-demand measurements — reading a battery voltage once per second, checking a potentiometer position when a button is pressed, or performing a one-time calibration measurement. The advantage is simplicity and zero background resource usage; the ADC is idle between triggers.
Continuous conversion mode: After completing one conversion, the ADC immediately starts the next, running back-to-back as fast as the ADC clock allows. The data register is continuously updated with the latest result. This mode is best for monitoring a single channel at maximum speed, and is almost always paired with DMA (because the CPU cannot reliably poll fast enough to catch every result without overrun). Continuous mode is the workhorse for real-time signal processing — audio sampling, vibration analysis, or current sensing in motor control.
Scan mode: The ADC sequentially converts a configured list of channels (the scan sequence or regular group). After completing all channels in the sequence, it either stops (single scan) or restarts (continuous scan). This is the standard mode for multi-sensor systems: read temperature on channel 3, current on channel 5, and voltage on channel 8 in a fixed sequence. Combined with DMA, the results land in an array where adc_buf[0] is always temperature, adc_buf[1] is always current, and adc_buf[2] is always voltage — clean and deterministic.
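One way to keep that fixed ordering readable is an enum over the DMA buffer indices (a sketch; the rank names and buffer are illustrative, not tied to a real board):

```c
#include <assert.h>
#include <stdint.h>

/* Scan-mode rank order: each channel always lands at the same index. */
enum adc_rank { RANK_TEMP = 0, RANK_CURRENT = 1, RANK_VOLTAGE = 2, RANK_COUNT };

static volatile uint16_t adc_buf[RANK_COUNT];   /* filled by DMA */

static uint16_t read_rank(enum adc_rank r)
{
    return adc_buf[r];   /* deterministic: rank -> buffer slot */
}
```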
Discontinuous mode: Converts a configurable subset of the scan sequence per trigger — for example, 3 channels per trigger out of a 9-channel sequence, requiring 3 triggers to complete one full scan. This is a niche mode used when you need precise, trigger-synchronized timing for each sub-group but have more channels than can be converted within the available trigger window. It is most common in interleaved ADC motor control schemes where different phase currents must be sampled at specific PWM timing points.
Q: How do you handle ADC measurements for a sensor that has a nonlinear response?
Many sensors — NTC thermistors, photodiodes, pH probes, gas sensors — have inherently nonlinear voltage-to-measurement transfer functions. The ADC faithfully digitizes the voltage, but converting that voltage to a meaningful physical quantity (temperature, light intensity, pH) requires linearization in software.
Lookup table with interpolation (most common in embedded): Store a table of ADC values and corresponding physical measurements at known calibration points, then use linear interpolation between adjacent entries for intermediate values. For an NTC thermistor, a 20-30 entry table spanning -40 to +125 degrees C with interpolation provides better than 0.1-degree accuracy across the full range. The computation is a single integer subtract, multiply, and divide — no floating point required. This approach handles arbitrary nonlinearities, uses modest memory (60-120 bytes for a 30-entry 16-bit table), and executes in deterministic time.
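A minimal sketch of the table-plus-interpolation approach in integer C; the four-entry thermistor table (in tenths of a degree C) is illustrative, not datasheet-derived, and a real table would have the 20-30 entries described above:

```c
#include <assert.h>
#include <stdint.h>

typedef struct { uint16_t raw; int16_t tenths_c; } lut_entry;

/* Calibration table sorted by ADC code. NTC: code rises as temp falls. */
static const lut_entry lut[] = {
    { 500, 1250 },   /* 125.0 C */
    {1500,  600 },   /*  60.0 C */
    {2500,  250 },   /*  25.0 C */
    {3500, -100 },   /* -10.0 C */
};
#define LUT_LEN (sizeof lut / sizeof lut[0])

/* Clamp outside the table, linearly interpolate between neighbors inside. */
static int16_t lut_lookup(uint16_t raw)
{
    if (raw <= lut[0].raw)         return lut[0].tenths_c;
    if (raw >= lut[LUT_LEN-1].raw) return lut[LUT_LEN-1].tenths_c;
    unsigned i = 1;
    while (lut[i].raw < raw) i++;  /* find upper neighbor */
    int32_t dx = lut[i].raw - lut[i-1].raw;
    int32_t dy = lut[i].tenths_c - lut[i-1].tenths_c;
    return lut[i-1].tenths_c + (int16_t)(dy * (raw - lut[i-1].raw) / dx);
}
```

The interpolation step is exactly the subtract/multiply/divide mentioned above, so it runs in deterministic time with no floating point.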
Mathematical model: Apply the sensor's characteristic equation directly. For NTC thermistors, the Steinhart-Hart equation (1/T = A + B*ln(R) + C*(ln(R))^3) provides excellent accuracy with just three calibration coefficients. For RTD sensors, the Callendar-Van Dusen equation is standard. The advantage is minimal memory usage (just the coefficients), but the disadvantage is significant: these equations require floating-point logarithms and divisions, which can take hundreds of microseconds on Cortex-M0/M3 cores without an FPU. On Cortex-M4F or M7 with hardware FPU, the computational cost is acceptable.
Piecewise linear approximation: Divide the sensor range into segments and apply a different linear equation (slope + offset) per segment. This is a middle ground — simpler than full interpolation, more memory-efficient than a dense LUT, and avoids floating-point math. The segment boundaries should be placed where the nonlinearity is strongest (the curve changes direction most rapidly) to minimize error. Always validate the linearization against known reference points during production calibration, not just against datasheet typical curves.