
Bit Manipulation Interview Problems for Embedded Engineers

Bit manipulation problems: power of four, range bitwise AND, XOR in subarray, single number, set/clear bits, endianness, and common embedded bit tricks.

Bit manipulation is the most embedded-specific coding topic. You use bitwise operations daily for register configuration, protocol field packing, flag management, and hardware control. Interviewers expect embedded candidates to be fluent with masks, shifts, and XOR tricks.

LeetCode Problems

| # | Problem | Difficulty |
|---|---------|------------|
| 1 | Number Complement | Easy |
| 2 | Is Power of Four | Easy |
| 3 | Number of 1 Bits (Hamming Weight) | Easy |
| 4 | Hamming Distance | Easy |
| 5 | Missing Number | Easy |
| 6 | Single Number | Easy |
| 7 | Sort Integers by Number of 1 Bits | Easy |
| 8 | Add Binary | Easy |
| 9 | Convert Number to Hexadecimal | Easy |
| 10 | XOR Operation in an Array | Easy |
| 11 | Range Bitwise AND | Medium |
| 12 | Bitwise XOR in a Subarray | Medium |
| 13 | Single Number II | Medium |
| 14 | Single Number III | Medium |
| 15 | Total Hamming Distance | Medium |
| 16 | Sum of Two Integers | Medium |
| 17 | UTF-8 Validation | Medium |
| 18 | Subsets | Medium |
| 19 | Repeated DNA Sequences | Medium |
| 20 | Max Length of Concatenated String with Unique Characters | Medium |
| 21 | Divide Two Integers | Medium |

Embedded Bit Manipulation Essentials

| Operation | Technique | Code |
|-----------|-----------|------|
| Count set bits | Brian Kernighan's algorithm | n &= (n - 1) in a loop; iteration count = set bit count |
| Power of 2 check | Single set bit detection | (x != 0) && !(x & (x - 1)) |
| Swap without temp | XOR swap | x ^= y; y ^= x; x ^= y |
| Invert all bits | XOR with all-ones mask | x ^ 0xFF (for a byte) |
| Set bit n | OR with shifted 1 | x \| (1 << n) |
| Clear bit n | AND with inverted mask | x & ~(1 << n) |
| Toggle bit n | XOR with shifted 1 | x ^ (1 << n) |
| Read bit n | Shift and AND | (x >> n) & 1 |
| Endianness swap (32-bit) | Byte reversal | __builtin_bswap32(x) or manual shift+mask |
| Detect endianness at runtime | Union or pointer cast | Check if the first byte of a 32-bit 0x01 is 0x01 (little) or 0x00 (big) |
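The table's entries are easy to sanity-check in isolation. A minimal sketch — standalone helper functions for illustration, not taken from any particular vendor HAL:

```cpp
#include <cstdint>

// Basic bit-field helpers built from the table's one-liners.
static inline uint32_t set_bit(uint32_t x, unsigned n)    { return x | (1u << n); }
static inline uint32_t clear_bit(uint32_t x, unsigned n)  { return x & ~(1u << n); }
static inline uint32_t toggle_bit(uint32_t x, unsigned n) { return x ^ (1u << n); }
static inline uint32_t read_bit(uint32_t x, unsigned n)   { return (x >> n) & 1u; }

// Runtime endianness probe: inspect the first byte of a known 32-bit pattern.
static inline bool is_little_endian() {
    uint32_t probe = 0x01;
    return *reinterpret_cast<unsigned char*>(&probe) == 0x01;
}
```

On a memory-mapped register you would apply the same masks through a volatile pointer; the arithmetic is identical.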

Implementation

Number Complement Easy

Problem: Given a positive integer, return its complement by flipping all bits in its binary representation.

Solution — O(log n) time, O(1) space

Key Insight: Instead of building a full-width mask, XOR each bit position individually. Walking through positions from LSB upward and XORing with 1 flips each significant bit while naturally ignoring leading zeros (the loop stops when the shifted copy reaches zero).

Thought Process:

  1. Brute force: find the bit width, create a mask of all 1s that width, XOR with the number — O(log n).
  2. Can we avoid computing the width separately? Yes — walk each bit position from LSB, XOR with 1 at that position, and track when we've processed all significant bits.
  3. Use a copy of the number, shifting right each iteration. When it reaches zero, we've flipped all significant bits.

Time Complexity: O(log n) — one iteration per bit in the number's binary representation. Space Complexity: O(1) — two integer variables.

```cpp
class Solution {
public:
    int findComplement(int num) {
        int bit = 1, tp = num;
        while (tp) {
            num ^= bit;
            bit <<= 1;
            tp >>= 1;
        }
        return num;
    }
};
```

Embedded relevance: Bit complement operations are used constantly for creating inverted masks in register programming — e.g., clearing specific bits in a control register while preserving others.


Is Power of Four Easy

Problem: Given an integer, determine whether it is a power of four.

Solution — O(1) time, O(1) space

Key Insight: A power of four is a power of two whose single set bit falls at an even position (bit 0, 2, 4, ...). The mask 0x55555555 has 1s at all even positions, so AND-ing with it confirms the bit is in the right place.

Thought Process:

  1. Brute force: repeatedly divide by 4, check if we reach 1 — O(log n).
  2. Optimization: a power of four must first be a power of two — check with n & (n-1) == 0.
  3. But powers of 2 at odd bit positions (like 2, 8, 32) aren't powers of 4. Filter with 0x55555555 — a mask with 1s only at even bit positions.
  4. Three conditions combined give O(1): positive, single set bit, bit at even position.

Time Complexity: O(1) — three bitwise operations. Space Complexity: O(1) — no extra storage.

```cpp
class Solution {
public:
    bool isPowerOfFour(int num) {
        return num > 0 && ((num & (num - 1)) == 0) && (0x55555555 & num);
    }
};
```

Embedded relevance: Power-of-two and power-of-four checks appear when validating DMA buffer alignment constraints — buffers often must be aligned to powers of 2 or 4 for hardware transfer.
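As a concrete illustration of that alignment check — the addresses and alignment values below are made-up examples:

```cpp
#include <cstdint>
#include <cstddef>

// An address is aligned to a power-of-two boundary exactly when its low
// log2(align) bits are zero. The same n & (n - 1) trick first confirms
// that align really is a power of two.
static inline bool is_aligned(uintptr_t addr, size_t align) {
    return (align != 0) && ((align & (align - 1)) == 0)
        && ((addr & (align - 1)) == 0);
}
```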


Number of 1 Bits (Hamming Weight) Easy

Problem: Given a positive integer, return the number of set bits in its binary representation.

Solution — O(k) time, O(1) space

Key Insight: The naive approach checks all 32 bits. But Brian Kernighan discovered that n & (n - 1) always clears the lowest set bit. Why? Subtracting 1 flips all bits from the lowest set bit downward — AND-ing with the original zeroes out exactly that bit.

Thought Process:

  1. The brute force is to loop through all 32 bits and check each one — O(32).
  2. Can we do better? If the number only has 3 set bits, we're wasting 29 iterations.
  3. Kernighan's trick: n &= (n-1) clears exactly one set bit per iteration, so the loop runs only k times where k = number of set bits.

Time Complexity: O(k) where k = number of set bits (at most 32, but often much less). Space Complexity: O(1) — single counter variable.

```cpp
class Solution {
public:
    int hammingWeight(uint32_t n) {
        int count = 0;
        while (n) {
            n &= (n - 1);
            count++;
        }
        return count;
    }
};
```

Embedded relevance: Popcount is used in interrupt priority arbitration (counting pending interrupts), error detection (parity checks), and resource allocation (counting available channels in a bitmask).


Hamming Distance Easy

Problem: Given two integers x and y, return the number of positions at which the corresponding bits are different.

Solution — O(k) time, O(1) space

Key Insight: This problem decomposes into two steps you already know: (1) find which bits differ, (2) count them. XOR gives you a 1 everywhere the bits differ and a 0 where they match — that's step 1. Then step 2 is just "Number of 1 Bits" (Hamming Weight).

Thought Process:

  1. How do I identify differing bit positions? XOR — x ^ y produces 1s exactly where x and y disagree.
  2. How do I count those 1s? Brian Kernighan's trick from the previous problem.
  3. Two well-known primitives composed together — this is the pattern interviewers want to see.

Time Complexity: O(k) where k = number of differing bits. Space Complexity: O(1).

```cpp
class Solution {
public:
    int hammingDistance(int x, int y) {
        int xorVal = x ^ y;
        int count = 0;
        while (xorVal) {
            xorVal &= (xorVal - 1);
            count++;
        }
        return count;
    }
};
```

Embedded relevance: Hamming distance is foundational in error-correcting codes (ECC) used in flash memory controllers and communication protocols to detect and correct bit errors.


Missing Number Easy

Problem: Given an array containing n distinct numbers in the range [0, n], return the missing number.

Solution — O(n) time, O(1) space

Key Insight: There are two classic approaches: (1) sum formula n*(n+1)/2 - sum(array), and (2) XOR cancellation. The sum approach risks integer overflow for large n. The XOR approach is overflow-safe because XOR never grows the value.

Thought Process:

  1. We need to find one missing element from a known complete range — this is a "find the unpaired" problem.
  2. XOR all indices 0 through n, then XOR all array values. Every number that appears in both cancels out (a ^ a = 0), leaving only the missing one.
  3. Think of it as: we have two sets that differ by exactly one element. XOR-ing all elements of both sets together isolates the difference.

Time Complexity: O(n) — two passes, one over indices, one over array. Space Complexity: O(1) — single accumulator variable.

```cpp
class Solution {
public:
    int missingNumber(vector<int>& nums) {
        int xorAll = 0;
        int n = nums.size();
        for (int i = 0; i <= n; i++) xorAll ^= i;
        for (int num : nums) xorAll ^= num;
        return xorAll;
    }
};
```

Embedded relevance: XOR-based checksums detect missing or corrupted data in packet sequences — the same principle used here to find the missing element.
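A minimal sketch of that checksum idea — the packet bytes here are hypothetical, not from a specific protocol:

```cpp
#include <cstdint>
#include <cstddef>

// Running XOR over a payload. The receiver XORs the data plus the received
// checksum byte; a zero result means no single-byte discrepancy slipped in.
uint8_t xor_checksum(const uint8_t* data, size_t len) {
    uint8_t acc = 0;
    for (size_t i = 0; i < len; i++) acc ^= data[i];
    return acc;
}
```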


Single Number Easy

Problem: Every element appears twice except for one. Find that single one. Must run in O(n) time with O(1) space.

Solution — O(n) time, O(1) space

Key Insight: XOR has two critical properties: (1) a ^ a = 0 — anything XORed with itself vanishes, and (2) a ^ 0 = a — anything XORed with zero stays. If you XOR all elements together, every pair cancels out, leaving only the unpaired element.

Thought Process:

  1. Hash map approach: count frequencies, return the one with count 1 — but that's O(n) space.
  2. Sort and compare neighbors — O(n log n) time.
  3. XOR trick: since every element appears exactly twice except one, XOR exploits the self-canceling property. No extra space, single pass.
  4. This is the foundational XOR pattern — once you see "appears twice except one," XOR should be your first instinct.

Time Complexity: O(n) — single pass. Space Complexity: O(1).

```cpp
class Solution {
public:
    int singleNumber(vector<int>& nums) {
        int result = 0;
        for (int num : nums) result ^= num;
        return result;
    }
};
```

Embedded relevance: This XOR identity is used in RAID parity computation — XOR all data drives to produce a parity drive that can reconstruct any single failed drive.
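That parity identity can be sketched in a few lines — toy word values, not a real RAID layout:

```cpp
#include <cstdint>
#include <cstddef>

// XOR parity across N data words. Any single lost word equals the XOR of
// the parity with all surviving words — the Single Number identity.
uint32_t xor_parity(const uint32_t* words, size_t n) {
    uint32_t p = 0;
    for (size_t i = 0; i < n; i++) p ^= words[i];
    return p;
}
```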


Sort Integers by Number of 1 Bits Easy

Problem: Sort an array in ascending order by the number of set bits. Ties broken by value.

Solution — O(n log n) time, O(1) space

Key Insight: This is a custom sorting problem. You need a comparator that first compares popcount, then value. The interesting part is how to count bits efficiently — you can use Kernighan's trick, a lookup table, or the compiler builtin __builtin_popcount.

Thought Process:

  1. This is just std::sort with a custom comparator — the algorithm choice is straightforward.
  2. The real question is efficiency of the popcount inside the comparator, since it's called O(n log n) times.
  3. __builtin_popcount compiles to a single hardware instruction on most modern CPUs — use it when available.
  4. If builtins aren't allowed, use Kernighan's trick or a nibble lookup table.

Time Complexity: O(n log n) for sorting. Each comparison is O(1) with builtin popcount. Space Complexity: O(1) excluding sort internals.

```cpp
class Solution {
public:
    vector<int> sortByBits(vector<int>& arr) {
        sort(arr.begin(), arr.end(), [](int a, int b) {
            int bitsA = __builtin_popcount(a);
            int bitsB = __builtin_popcount(b);
            return bitsA == bitsB ? a < b : bitsA < bitsB;
        });
        return arr;
    }
};
```

Embedded relevance: Hardware popcount instructions (POPCNT on x86, VCNT on ARM NEON) are single-cycle — knowing when to use builtins vs. manual loops matters for performance-critical firmware.
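When the builtin isn't available on a target, a nibble lookup table is a common portable fallback — a sketch:

```cpp
#include <cstdint>

// Set-bit counts for every nibble value 0..15.
static const uint8_t kNibbleBits[16] = {0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4};

// Portable popcount for targets without a POPCNT-style instruction.
int popcount32(uint32_t x) {
    int count = 0;
    for (int i = 0; i < 8; i++) {     // 8 nibbles in 32 bits
        count += kNibbleBits[x & 0xF];
        x >>= 4;
    }
    return count;
}
```

The table trades 16 bytes of ROM for a fixed 8-iteration loop — a reasonable middle ground between Kernighan's data-dependent loop and a full 256-entry byte table.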


Add Binary Easy

Problem: Given two binary strings, return their sum as a binary string.

Solution — O(max(m,n)) time, O(1) space

Key Insight: This is a software simulation of a hardware full adder. At each bit position, three inputs produce two outputs: sum = A XOR B XOR carry_in and carry_out = (A AND B) OR ((A XOR B) AND carry_in). These are the exact Boolean equations of a full adder circuit.

Thought Process:

  1. Process both strings from right to left (LSB first), just like manual binary addition.
  2. At each position: add the two bits plus any carry from the previous position.
  3. Handle unequal string lengths by treating missing positions as 0.
  4. Don't forget the final carry — "1" + "1" = "10" needs an extra digit.
  5. Either build the result in reverse order and reverse it at the end, or reuse the longer input string as the output buffer — the sum is at most one digit longer, so only a final carry needs an insert at the front.

Time Complexity: O(max(m, n)) — single pass through both strings. Space Complexity: O(1) extra — modifies the larger input string in place.

```cpp
class Solution {
public:
    string addBinary(string a, string b) {
        string& res = (a.size() >= b.size()) ? a : b;
        int i = a.size() - 1, j = b.size() - 1, k = res.size() - 1;
        int carry = 0;
        while (k >= 0) {
            int bitA = i < 0 ? 0 : a[i] & 1;
            int bitB = j < 0 ? 0 : b[j] & 1;
            int sum = (bitA ^ bitB) ^ carry;
            carry = ((bitA ^ bitB) & carry) | (bitA & bitB);
            res[k] = sum + '0';
            i--; j--; k--;
        }
        if (carry) res.insert(res.begin(), '1');
        return res;
    }
};
```

Embedded relevance: This is a software implementation of the ripple-carry adder circuit — the fundamental building block of ALU design.
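The full-adder equations from the key insight can be lifted out and chained explicitly — a toy 8-bit ripple-carry sketch, written to make the circuit visible rather than as production addition code:

```cpp
#include <cstdint>

// One-bit full adder: the exact Boolean equations of the hardware cell.
static inline void full_add(int a, int b, int cin, int* sum, int* cout) {
    *sum  = a ^ b ^ cin;
    *cout = (a & b) | ((a ^ b) & cin);
}

// Ripple-carry addition of two 8-bit values, one bit lane at a time.
uint8_t ripple_add8(uint8_t x, uint8_t y) {
    uint8_t result = 0;
    int carry = 0;
    for (int i = 0; i < 8; i++) {
        int s, c;
        full_add((x >> i) & 1, (y >> i) & 1, carry, &s, &c);
        result |= (uint8_t)(s << i);
        carry = c;
    }
    return result;   // final carry out is discarded, like 8-bit hardware
}
```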


Convert Number to Hexadecimal Easy

Problem: Convert a 32-bit integer to hexadecimal. Handle negatives via two's complement. No built-in library.

Solution — O(1) time, O(1) space

Key Insight: Each hex digit represents exactly 4 bits. So to convert, repeatedly extract the lowest 4 bits (& 0xf), map to a hex character, then right-shift by 4. The key trick for negative numbers: cast to unsigned int — this reinterprets the bit pattern without changing it, so two's complement representation is preserved naturally.

Thought Process:

  1. How does hex relate to binary? Each hex digit = 4 bits. So extracting hex digits is just grouping bits in nibbles.
  2. Extract the lowest nibble: n & 0xf gives a value 0-15. Map to '0'-'9' or 'a'-'f' via a lookup string.
  3. Shift right by 4 to move to the next nibble. Repeat until n is 0.
  4. Negative numbers: -1 in two's complement is 0xFFFFFFFF. Casting to unsigned int gives us this representation directly.
  5. Result is built LSB-first, so reverse at the end.

Time Complexity: O(1) — at most 8 hex digits for a 32-bit integer. Space Complexity: O(1).

```cpp
class Solution {
public:
    string toHex(int num) {
        if (!num) return "0";
        unsigned int n = num;
        string hex = "0123456789abcdef";
        string result;
        while (n) {
            result += hex[n & 0xf];
            n >>= 4;
        }
        reverse(result.begin(), result.end());
        return result;
    }
};
```

Embedded relevance: Hex conversion appears in every debug printf — printing register values, memory dumps, and protocol bytes. The & 0xf / >> 4 pattern is in every embedded printf implementation.
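A firmware-flavored variant of the same nibble loop, writing fixed-width output into a caller-supplied buffer — an illustrative helper, not taken from any specific printf implementation:

```cpp
#include <cstdint>

// Minimal fixed-width hex formatter for debug output on targets without
// printf: emits exactly 8 digits plus a NUL terminator.
void u32_to_hex(uint32_t value, char out[9]) {
    static const char digits[] = "0123456789abcdef";
    for (int i = 7; i >= 0; i--) {
        out[i] = digits[value & 0xF];   // lowest nibble -> rightmost digit
        value >>= 4;
    }
    out[8] = '\0';
}
```

Fixed width avoids the reverse step entirely: digits are placed right-to-left as the value is consumed.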


XOR Operation in an Array Easy

Problem: Given n and start, compute XOR of all elements where nums[i] = start + 2*i.

Solution — O(n) time, O(1) space

Thought Process:

  1. The straightforward approach: generate each element and XOR it into an accumulator. No array needed since we only need the running XOR.
  2. An O(1) solution exists using the prefix XOR pattern for consecutive integers, but the constraints (n ≤ 1000) make the simple loop perfectly fine.
  3. In an interview, mention the O(1) possibility but implement the O(n) version — it's clearer and shows you know when optimization isn't needed.

Time Complexity: O(n). Space Complexity: O(1).

```cpp
class Solution {
public:
    int xorOperation(int n, int start) {
        int result = 0;
        for (int i = 0; i < n; i++) result ^= start + 2 * i;
        return result;
    }
};
```
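The O(1) possibility mentioned above rests on the closed form for the XOR of consecutive integers, which cycles with period 4. Mapping start + 2*i onto a consecutive range takes extra bookkeeping, so only the base identity is sketched here:

```cpp
#include <cstdint>

// XOR of all integers 0..n follows a period-4 pattern:
// n, 1, n + 1, 0 for n % 4 == 0, 1, 2, 3 respectively.
uint32_t xor_up_to(uint32_t n) {
    switch (n % 4) {
        case 0:  return n;
        case 1:  return 1;
        case 2:  return n + 1;
        default: return 0;
    }
}
```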

Range Bitwise AND Medium

Problem: Given two integers left and right representing a range [left, right], return the bitwise AND of all numbers in this range.

Solution — O(log n) time, O(1) space

Key Insight: AND-ing all numbers in a range preserves only the common high-bit prefix of the endpoints. Any bit position where the endpoints differ will have both 0 and 1 somewhere in the range, zeroing that position in the AND result.

Thought Process:

  1. Brute force: AND all numbers from left to right — O(n) where n = range size. Way too slow for large ranges.
  2. Observe: if any bit position differs between left and right, some number in the range has a 0 there, making the AND zero for that bit.
  3. So the answer is just the common prefix of left and right in binary, with the differing suffix zeroed out.
  4. Use Brian Kernighan's trick: right = right & (right - 1) strips the lowest set bit from the upper bound. Repeat until right <= left — what remains is the common prefix.

Time Complexity: O(log n) — at most 32 iterations for 32-bit integers. Space Complexity: O(1).

```cpp
class Solution {
public:
    int rangeBitwiseAnd(int left, int right) {
        // Strip the lowest set bit of the upper bound until it no longer
        // exceeds the lower bound; what survives is the common prefix.
        while (left < right) {
            right &= (right - 1);
        }
        return right;
    }
};
```

Embedded relevance: Common prefix extraction is used in IP subnet masking — finding the network address by AND-ing an IP with its subnet mask uses the same principle.
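A sketch of that subnet computation — the example addresses are chosen purely for illustration:

```cpp
#include <cstdint>

// Network address = IP AND netmask: the mask keeps the common high-bit
// prefix and zeroes the host suffix, just like the range-AND result.
uint32_t network_address(uint32_t ip, uint32_t netmask) {
    return ip & netmask;
}
```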


Bitwise XOR in a Subarray Medium

Problem: Given an array of integers and a list of queries [left, right], return the XOR of elements from index left to right for each query.

Solution — O(n + q) time, O(1) extra space

Key Insight: XOR has a self-canceling property: a ^ a = 0. A prefix XOR array lets you compute the XOR of any subarray in O(1) — prefix[j] ^ prefix[i-1] cancels all elements before index i, leaving only arr[i] ^ ... ^ arr[j].

Thought Process:

  1. Brute force: for each query, iterate from left to right XOR-ing elements — O(q × n) total.
  2. Can we precompute something? Yes — prefix sums work for range sum queries; prefix XOR works the same way for range XOR queries.
  3. Build prefix[i] = arr[0] ^ arr[1] ^ ... ^ arr[i] in one pass. Then XOR(l, r) = prefix[r] ^ prefix[l-1].
  4. Handle the edge case: when l = 0, the answer is just prefix[r] — there is no earlier prefix to cancel.

Time Complexity: O(n + q) — O(n) to build prefix array, O(1) per query. Space Complexity: O(1) extra — modifies input array in place as prefix array.

```cpp
class Solution {
public:
    vector<int> xorQueries(vector<int>& arr, vector<vector<int>>& queries) {
        vector<int> res;
        for (int i = 1; i < arr.size(); i++) {
            arr[i] ^= arr[i - 1];
        }
        for (auto& v : queries) {
            if (v[0] == 0)
                res.push_back(arr[v[1]]);
            else
                res.push_back(arr[v[1]] ^ arr[v[0] - 1]);
        }
        return res;
    }
};
```

Embedded relevance: Prefix XOR is the technique behind CRC and checksum verification — computing the XOR of a data range to detect transmission errors in serial protocols.


Single Number II Medium

Problem: Given an integer array where every element appears three times except for one which appears exactly once, find the single element.

Solution — O(n) time, O(1) space

Key Insight: When every element appears k times except one, counting the number of set bits at each position modulo k isolates the unique element's bits. For k=3, if the count mod 3 is 1, the unique number has that bit set.

Thought Process:

  1. Hash map approach: count frequencies, return the one with count 1 — O(n) time but O(n) space.
  2. Sort and scan: find the element not in a group of 3 — O(n log n) time, O(1) space.
  3. Bit counting: for each of the 32 bit positions, sum how many numbers have that bit set. Mod 3 removes the contribution of every triple, leaving only the unique element's bit.
  4. This generalizes: for "appears k times except one," just use mod k instead of mod 3.

Time Complexity: O(32 × n) which simplifies to O(n) — examine each bit position for every number. Space Complexity: O(1) — two loop counters and result variable.

```cpp
class Solution {
public:
    int singleNumber(vector<int>& nums) {
        int res = 0;
        for (int i = 0; i < 32; ++i) {
            int sum = 0;
            for (int j = 0; j < nums.size(); ++j) {
                sum += (nums[j] >> i) & 1;
            }
            res |= (sum % 3) << i;
        }
        return res;
    }
};
```

Embedded relevance: Per-bit-position counting is used in voting-based fault tolerance — determining the majority value at each bit across redundant sensor channels to mask transient errors.


Single Number III Medium

Problem: Exactly two elements appear once, all others appear twice. Find both unique elements in O(n) time, O(1) space.

Solution — O(n) time, O(1) space

Key Insight: XOR-ing everything gives a ^ b, not the individual values. You need a way to separate a and b into different groups. Since a != b, at least one bit differs between them — use that bit to partition the array so a and b land in different partitions. Then XOR each partition separately.

Thought Process:

  1. XOR all elements → get a ^ b. But how to extract a and b individually?
  2. a ^ b has 1s where a and b differ. Pick any such bit — the lowest one is easiest: diff & (-diff).
  3. Split the array into "has this bit set" and "doesn't have this bit set." Each paired element falls into the same group (both have the bit set or both don't). But a and b fall into different groups.
  4. XOR each group → get a and b separately. This is the "split the problem in half" insight.

Time Complexity: O(n) — two passes over the array. Space Complexity: O(1).

```cpp
class Solution {
public:
    vector<int> singleNumber(vector<int>& nums) {
        int xorAll = 0;
        for (int num : nums) xorAll ^= num;
        int diffBit = xorAll & (-xorAll);
        vector<int> result = {0, 0};
        for (int num : nums) {
            if (num & diffBit) result[1] ^= num;
            else result[0] ^= num;
        }
        return result;
    }
};
```

Embedded relevance: The x & (-x) isolate-lowest-bit trick appears in hardware priority encoders — finding the lowest pending interrupt in a bitmask register.
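A sketch of that priority-encoder idea in software — the pending-interrupt bitmask here is hypothetical:

```cpp
#include <cstdint>

// Index of the lowest set bit (lowest pending interrupt): isolate it with
// pending & -pending, then count how far that single bit sits from bit 0.
int lowest_pending(uint32_t pending) {
    if (pending == 0) return -1;               // nothing pending
    uint32_t isolated = pending & (0u - pending);
    int index = 0;
    while ((isolated >>= 1) != 0) index++;     // position of the single bit
    return index;
}
```

On Cortex-M parts the same result usually comes from a CLZ-based sequence or `__builtin_ctz`; the loop above just makes the mechanics explicit.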


Total Hamming Distance Medium

Problem: Given an integer array, return the sum of Hamming distances between all pairs.

Solution — O(32n) ≈ O(n) time, O(1) space

Key Insight: The brute force compares every pair — O(n²). The trick is to think per bit position instead of per pair. At each of the 32 bit positions, some numbers have that bit set and some don't. Every (set, unset) pair contributes 1 to the total distance at that position.

Thought Process:

  1. Brute force: for each pair (i, j), compute popcount(nums[i] ^ nums[j]). That's O(n² × 32) — too slow.
  2. Decompose by bit position: at bit k, count how many numbers have it set (c) and how many don't (n - c).
  3. The number of pairs that differ at bit k is c × (n - c) (combinatorics — pick one from each group).
  4. Sum across all 32 positions. Each number is examined 32 times, so total work is O(32n) = O(n).

Time Complexity: O(32 × n) which simplifies to O(n). Space Complexity: O(1).

```cpp
class Solution {
public:
    int totalHammingDistance(vector<int>& nums) {
        int n = nums.size(), total = 0;
        for (int bit = 0; bit < 32; bit++) {
            int ones = 0;
            for (int k = 0; k < n; k++) ones += (nums[k] >> bit) & 1;
            total += ones * (n - ones);
        }
        return total;
    }
};
```

Embedded relevance: Per-bit-position counting is the same technique used in voting-based fault tolerance — determining the majority value at each bit position across redundant sensor readings.
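The voting idea reduces to three ANDs and two ORs — a 2-of-3 bitwise majority sketch:

```cpp
#include <cstdint>

// Bitwise 2-of-3 majority vote: each output bit takes the value agreed on
// by at least two of the three redundant inputs, masking any single fault.
uint32_t majority3(uint32_t a, uint32_t b, uint32_t c) {
    return (a & b) | (b & c) | (a & c);
}
```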


Sum of Two Integers Medium

Problem: Return the sum of two integers without using + or - operators.

Solution — O(1) time, O(1) space

Key Insight: Think about how a hardware adder works. A half adder computes: sum = A XOR B (the sum without carry) and carry = A AND B (the carry). A full adder chains half adders. In software, we repeat: compute sum-without-carry and carry, then "add" those together (by repeating the process) until there's no more carry.

Thought Process:

  1. We can't use + or -, so we need to think at the bit level.
  2. What does a XOR b give us? The sum of a and b ignoring carries (e.g., 1 + 1 = 0 with carry, XOR gives 0).
  3. What does (a AND b) << 1 give us? The carry bits, shifted left to the correct position.
  4. Now we need to "add" the sum and carry — but we can't use +! So we repeat the process with the new values.
  5. Eventually the carry becomes 0, and the XOR result is the final answer. This always terminates because the carry shifts left each iteration.

Time Complexity: O(1) — bounded by 32 iterations for 32-bit integers (carry shifts left each round). Space Complexity: O(1).

```cpp
class Solution {
public:
    int getSum(int a, int b) {
        while (b != 0) {
            int carry = (a & b) << 1;
            a = a ^ b;
            b = carry;
        }
        return a;
    }
};
```

Embedded relevance: Understanding addition at the gate level (XOR for sum, AND for carry) is essential for digital logic design and debugging ALU-related silicon bugs.


UTF-8 Validation Medium

Problem: Given an array of bytes, determine if the data is a valid UTF-8 encoding.

Solution — O(n) time, O(1) space

Key Insight: UTF-8 is a state machine. You're either (a) reading a start byte (which tells you how many continuation bytes follow) or (b) reading continuation bytes (which must match 10xxxxxx). The leading bits of the start byte encode the character length — this is elegant protocol design that enables self-synchronization.

Thought Process:

  1. First, understand the UTF-8 encoding rules: leading bits of the first byte determine the character length (1-4 bytes). Continuation bytes always start with 10.
  2. Track a counter remaining = how many continuation bytes we still expect.
  3. If remaining == 0, we're reading a start byte. Check its leading bits via right-shift to determine the character length.
  4. If remaining > 0, the byte must start with 10 (check: byte >> 6 == 0b10). Decrement remaining.
  5. At the end, remaining must be 0 — otherwise we have an incomplete character.
  6. Don't forget: 5+ byte sequences (leading 11111xxx) are invalid UTF-8.

Time Complexity: O(n) — single pass through the data. Space Complexity: O(1) — single counter.

```cpp
class Solution {
public:
    bool validUtf8(vector<int>& data) {
        int remaining = 0;
        for (int byte : data) {
            if (remaining == 0) {
                if ((byte >> 5) == 0b110) remaining = 1;
                else if ((byte >> 4) == 0b1110) remaining = 2;
                else if ((byte >> 3) == 0b11110) remaining = 3;
                else if (byte >> 7) return false;
            } else {
                if ((byte >> 6) != 0b10) return false;
                remaining--;
            }
        }
        return remaining == 0;
    }
};
```

Embedded relevance: UTF-8 validation is critical in IoT devices receiving text over UART/BLE — malformed sequences can crash display firmware or cause buffer overflows.


Subsets Medium

Problem: Given an array of unique elements, return all possible subsets (the power set).

Solution — O(n × 2^n) time, O(n × 2^n) space

Key Insight: There's a beautiful mapping between integers and subsets. For n elements, there are 2^n subsets. Each integer from 0 to 2^n-1, when written in binary, tells you which elements to include — bit j set means include nums[j]. This is called bitmask enumeration.

Thought Process:

  1. Backtracking approach: at each element, choose to include or exclude it. This gives a binary tree of depth n with 2^n leaves — each leaf is a subset.
  2. Bitmask approach: realize that the include/exclude decision at each position is a binary digit. So each subset maps to a unique n-bit number.
  3. Iterate mask from 0 to (1 << n) - 1. For each mask, check which bits are set and include the corresponding elements.
  4. Both approaches produce the same result. Bitmask is more "embedded-flavored" and often cleaner to implement.

Time Complexity: O(n × 2^n) — 2^n subsets, each up to size n. Space Complexity: O(n × 2^n) — storing all subsets.

```cpp
class Solution {
public:
    vector<vector<int>> subsets(vector<int>& nums) {
        int n = nums.size(), total = 1 << n;
        vector<vector<int>> result(total);
        for (int mask = 0; mask < total; mask++) {
            for (int j = 0; j < n; j++) {
                if ((mask >> j) & 1) result[mask].push_back(nums[j]);
            }
        }
        return result;
    }
};
```

Embedded relevance: Bitmask enumeration is used in hardware configuration — testing all combinations of GPIO pin states or peripheral enable bits during validation.


Repeated DNA Sequences Medium

Problem: Find all 10-letter DNA subsequences that occur more than once.

Solution — O(n) time, O(n) space

Key Insight: The naive approach hashes 10-character substrings — each hash operation is O(10). But DNA has only 4 characters, each encodable in 2 bits. A 10-character window fits in 20 bits (well within an int). Using a rolling bitmask hash, each window update is O(1).

Thought Process:

  1. Brute force: extract every 10-char substring, store in a hash set, check for duplicates. O(10n) with string hashing.
  2. Optimization: encode each base in 2 bits — here A=00, T=01, G=10, C=11 (any fixed assignment works). A 10-char window = 20 bits = fits in an int.
  3. Sliding window: to move the window right, left-shift by 2, OR in the new character, AND with 0xFFFFF to keep only 20 bits. O(1) per step.
  4. Use a frequency map instead of a set — add to result only when count reaches exactly 2, avoiding duplicates in output.

Time Complexity: O(n) — single pass with O(1) per window update. Space Complexity: O(n) — hash map storing up to n-9 window hashes.

```cpp
class Solution {
public:
    vector<string> findRepeatedDnaSequences(string s) {
        int n = s.size();
        vector<string> result;
        if (n <= 10) return result;
        unordered_map<int, int> freq;
        auto encode = [](char c) {
            if (c == 'A') return 0;
            if (c == 'T') return 1;
            if (c == 'G') return 2;
            return 3;
        };
        int mask = 0;
        for (int i = 0; i < 10; i++) mask = (mask << 2) | encode(s[i]);
        freq[mask]++;
        int keep = 0xFFFFF;
        for (int i = 1; i <= n - 10; i++) {
            mask = ((mask << 2) | encode(s[i + 9])) & keep;
            if (++freq[mask] == 2) result.push_back(s.substr(i, 10));
        }
        return result;
    }
};
```

Embedded relevance: Rolling hash with bit packing is used in network packet inspection — encoding fixed-width protocol fields as compact bitmasks for fast lookup.


Max Length of Concatenated String with Unique Characters Medium

Problem: Find the maximum length of a concatenation of a subsequence of strings that has all unique characters.

Solution — O(2^n) time, O(2^n) space

Key Insight: Each string can be represented as a 26-bit mask (one bit per letter). Two strings can be concatenated only if their masks don't overlap ((mask1 & mask2) == 0). This turns the problem into a bitmask compatibility search.

Thought Process:

  1. First, filter out strings with internal duplicate characters (e.g., "aab") — their bit mask would have fewer bits than characters.
  2. For each remaining string, try combining it with all previously built valid combinations.
  3. Two combinations are compatible if they share no letters — check with (existing & new).any() == false.
  4. This is essentially a knapsack/subset problem with a compatibility constraint, solved via iterative DP on bitmasks.
  5. Iterate the list of existing combinations from the back so that combinations appended during the current pass aren't extended with the same string twice.

Time Complexity: O(2^n) — in the worst case, every string is compatible with every other. Space Complexity: O(2^n) — storing all valid bitmask combinations.

```cpp
class Solution {
public:
    int maxLength(vector<string>& arr) {
        vector<bitset<26>> dp = {bitset<26>()};
        int result = 0;
        for (auto& s : arr) {
            bitset<26> mask;
            for (char c : s) mask.set(c - 'a');
            if ((int)mask.count() < (int)s.size()) continue;
            for (int i = dp.size() - 1; i >= 0; i--) {
                if ((dp[i] & mask).any()) continue;
                dp.push_back(dp[i] | mask);
                result = max(result, (int)(dp[i].count() + mask.count()));
            }
        }
        return result;
    }
};
```

Divide Two Integers Medium

Problem: Divide two integers without using multiplication, division, or mod operators.

Solution — O(log²n) time, O(1) space

Key Insight: Division is repeated subtraction. But subtracting the divisor one at a time is O(n/d) — way too slow. Instead, use bit shifting to double the divisor until it's as large as possible, subtract that chunk, and repeat. This is exactly how hardware division works in CPUs without a dedicated divider.

Thought Process:

  1. Brute force: subtract divisor from dividend repeatedly, count how many times. Too slow for large values.
  2. Optimization: instead of subtracting d once, find the largest k where d << k still fits in the remaining dividend. That accounts for 2^k divisions at once.
  3. Subtract d << k from the dividend, add 1 << k to the quotient. Repeat with the remainder.
  4. Edge cases are critical: INT_MIN / -1 overflows (since |INT_MIN| > INT_MAX). Use unsigned arithmetic to avoid UB.
  5. XOR the sign bits to determine result sign — cleaner than if/else chains.

Time Complexity: O(log²n) — outer loop runs O(log n) times, inner shift loop also O(log n). Space Complexity: O(1).

```cpp
class Solution {
public:
    int divide(int dividend, int divisor) {
        if (dividend == divisor) return 1;
        bool negative = (dividend < 0) != (divisor < 0);
        // Negate through unsigned arithmetic: abs(INT_MIN) would overflow int.
        unsigned int n = dividend < 0 ? 0u - (unsigned int)dividend : (unsigned int)dividend;
        unsigned int d = divisor < 0 ? 0u - (unsigned int)divisor : (unsigned int)divisor;
        unsigned int quotient = 0;
        while (n >= d) {
            int shift = 0;
            while (n > (d << (shift + 1))) shift++;  // largest chunk: d * 2^shift
            n -= d << shift;
            quotient += 1u << shift;
        }
        if (quotient == (1u << 31)) return negative ? INT_MIN : INT_MAX;
        return negative ? -(int)quotient : (int)quotient;
    }
};
```

Embedded relevance: Software division via shift-and-subtract is how ARM Cortex-M0 cores (no hardware divider) implement integer division. Understanding this helps debug division performance on constrained targets.
