Design
170. Two Sum III - Data structure design
Approach 1 - binary search + sorted array
```typescript
class TwoSum {
  private sortArr: number[];

  constructor() {
    this.sortArr = [];
  }

  // Binary search for the insertion point is O(log n), but splice shifts
  // elements, so add is O(n) overall.
  add(number: number): void {
    let l = 0, r = this.sortArr.length;
    while (l < r) {
      const mid = l + Math.floor((r - l) / 2);
      if (this.sortArr[mid] < number) {
        l = mid + 1;
      } else {
        r = mid;
      }
    }
    this.sortArr.splice(l, 0, number);
  }

  // O(n) two-pointer scan over the sorted array
  find(value: number): boolean {
    let l = 0, r = this.sortArr.length - 1;
    while (l < r) {
      const sum = this.sortArr[l] + this.sortArr[r];
      if (sum === value) return true;
      if (sum < value) l++;
      else r--;
    }
    // Check if value is twice a number that appears at least twice
    // (the two-pointer scan above already covers this case, so this loop is
    // only a defensive fallback).
    for (let i = 1; i < this.sortArr.length; i++) {
      if (this.sortArr[i] === this.sortArr[i - 1] && this.sortArr[i] * 2 === value) {
        return true;
      }
    }
    return false;
  }
}
```
Approach 2: HashMap / LinkedHashMap
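The notes don't spell this approach out; a minimal sketch, assuming the same add/find interface and a plain count map (a LinkedHashMap variant would only change iteration order):

```typescript
// Hedged sketch of the HashMap approach: store value counts so duplicates work.
class TwoSumMap {
  private count = new Map<number, number>();

  add(num: number): void {
    this.count.set(num, (this.count.get(num) ?? 0) + 1);
  }

  find(value: number): boolean {
    for (const [num, cnt] of this.count) {
      const complement = value - num;
      if (complement === num) {
        if (cnt >= 2) return true; // need the same number twice
      } else if (this.count.has(complement)) {
        return true;
      }
    }
    return false;
  }
}
```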
Use a map to store the last time each log message was printed.
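This note appears to describe a rate-limiting logger rather than Two Sum; a minimal sketch, assuming a shouldPrintMessage(timestamp, message) interface and a 10-second window (both are assumptions):

```typescript
class Logger {
  private lastPrinted = new Map<string, number>(); // message -> last printed timestamp

  shouldPrintMessage(timestamp: number, message: string): boolean {
    const last = this.lastPrinted.get(message);
    // Suppress the message if it was printed within the assumed 10-second window.
    if (last !== undefined && timestamp - last < 10) return false;
    this.lastPrinted.set(message, timestamp);
    return true;
  }
}
```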
Use a list to store all numbers and a HashMap<number, index> to look up each number's position in the list.
insert: append val to the list and record its index in the map.
remove: check whether val is the last element of the list. If so, simply remove the last element from the list and the map. Otherwise, overwrite val's slot with the last element, update that element's index in the map, and then remove the last element.
getRandom: return list.get(rand.nextInt(list.size()));
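A minimal TypeScript sketch of this list + index-map design (the notes above use Java-style calls; all three operations are average O(1)):

```typescript
class RandomizedSet {
  private nums: number[] = [];
  private indexOf = new Map<number, number>(); // value -> position in nums

  insert(val: number): boolean {
    if (this.indexOf.has(val)) return false;
    this.indexOf.set(val, this.nums.length);
    this.nums.push(val);
    return true;
  }

  remove(val: number): boolean {
    const idx = this.indexOf.get(val);
    if (idx === undefined) return false;
    const lastIdx = this.nums.length - 1;
    const lastVal = this.nums[lastIdx];
    // Overwrite the removed slot with the last element, then shrink.
    this.nums[idx] = lastVal;
    this.indexOf.set(lastVal, idx);
    this.nums.pop();
    this.indexOf.delete(val);
    return true;
  }

  getRandom(): number {
    return this.nums[Math.floor(Math.random() * this.nums.length)];
  }
}
```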
Follow-up with duplicates: use Map<Integer, Set<Integer>> to store each value and its set of positions. On remove, pick any one position from the value's set (e.g. via its iterator), swap that slot with the last position, and update the affected position sets.
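A sketch of that duplicate-friendly follow-up, keeping a Set of positions per value; the class name is illustrative:

```typescript
class RandomizedCollection {
  private nums: number[] = [];
  private positions = new Map<number, Set<number>>(); // value -> all positions

  insert(val: number): boolean {
    const isNew = !this.positions.has(val) || this.positions.get(val)!.size === 0;
    if (!this.positions.has(val)) this.positions.set(val, new Set());
    this.positions.get(val)!.add(this.nums.length);
    this.nums.push(val);
    return isNew;
  }

  remove(val: number): boolean {
    const set = this.positions.get(val);
    if (!set || set.size === 0) return false;
    // Take any position of val (via the iterator), then fill the hole with the last element.
    const pos = set.values().next().value as number;
    set.delete(pos);
    const lastIdx = this.nums.length - 1;
    const lastVal = this.nums[lastIdx];
    if (pos !== lastIdx) {
      this.nums[pos] = lastVal;
      const lastSet = this.positions.get(lastVal)!;
      lastSet.delete(lastIdx);
      lastSet.add(pos);
    }
    this.nums.pop();
    return true;
  }

  getRandom(): number {
    return this.nums[Math.floor(Math.random() * this.nums.length)];
  }
}
```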
When the new value is <= the current min, push the old min onto the stack before pushing the new value. When popping, if the popped element equals the current min, pop again to restore the previous min.
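A minimal sketch of this single-stack min-stack trick:

```typescript
class MinStack {
  private stack: number[] = [];
  private min = Infinity;

  push(val: number): void {
    // Save the previous minimum before it gets replaced.
    if (val <= this.min) {
      this.stack.push(this.min);
      this.min = val;
    }
    this.stack.push(val);
  }

  pop(): void {
    const popped = this.stack.pop();
    // If the minimum was popped, the value underneath is the previous minimum.
    if (popped === this.min) {
      this.min = this.stack.pop()!;
    }
  }

  top(): number {
    return this.stack[this.stack.length - 1];
  }

  getMin(): number {
    return this.min;
  }
}
```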
Use an array of size k to store the elements, start and end indices to mark the boundaries, and a size counter to maintain the queue length.
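A minimal sketch of the fixed-size circular queue, assuming the usual enQueue/deQueue/Front/Rear interface; here the end index is derived from the head index and the size:

```typescript
class MyCircularQueue {
  private data: number[];
  private head = 0; // index of the front element
  private size = 0; // current number of elements

  constructor(private capacity: number) {
    this.data = new Array(capacity);
  }

  enQueue(value: number): boolean {
    if (this.size === this.capacity) return false;
    this.data[(this.head + this.size) % this.capacity] = value;
    this.size++;
    return true;
  }

  deQueue(): boolean {
    if (this.size === 0) return false;
    this.head = (this.head + 1) % this.capacity;
    this.size--;
    return true;
  }

  Front(): number { return this.size === 0 ? -1 : this.data[this.head]; }
  Rear(): number { return this.size === 0 ? -1 : this.data[(this.head + this.size - 1) % this.capacity]; }
  isEmpty(): boolean { return this.size === 0; }
  isFull(): boolean { return this.size === this.capacity; }
}
```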
Clarifications:
For simplicity, are the keys integers only?
For collision resolution, can we use chaining?
Do we have to worry about load factors?
Can we assume inputs are valid or do we have to validate them?
Can we assume this fits memory?
Method: use a list of buckets to store the nodes; each bucket is a linked list (see the sketch after this list):
int getIndex(int key): get the index into the bucket list based on the provided key.
ListNode getPrev(int index, int key): get the node just before the node with the given key (or the bucket's tail), used for insertion and deletion.
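A minimal sketch of this chaining design, assuming integer keys and a fixed bucket count (no rehashing); a dummy head per bucket keeps getPrev simple:

```typescript
class ListNode {
  constructor(
    public key: number,
    public val: number,
    public next: ListNode | null = null
  ) {}
}

class MyHashMap {
  private static SIZE = 1024;
  private buckets: ListNode[];

  constructor() {
    // Each bucket starts with a dummy head so getPrev always has a node to return.
    this.buckets = Array.from({ length: MyHashMap.SIZE }, () => new ListNode(-1, -1));
  }

  private getIndex(key: number): number {
    return key % MyHashMap.SIZE;
  }

  // Return the node just before the node holding `key` (or the bucket's tail).
  private getPrev(index: number, key: number): ListNode {
    let prev = this.buckets[index];
    while (prev.next !== null && prev.next.key !== key) {
      prev = prev.next;
    }
    return prev;
  }

  put(key: number, value: number): void {
    const prev = this.getPrev(this.getIndex(key), key);
    if (prev.next === null) {
      prev.next = new ListNode(key, value);
    } else {
      prev.next.val = value;
    }
  }

  get(key: number): number {
    const prev = this.getPrev(this.getIndex(key), key);
    return prev.next === null ? -1 : prev.next.val;
  }

  remove(key: number): void {
    const prev = this.getPrev(this.getIndex(key), key);
    if (prev.next !== null) {
      prev.next = prev.next.next;
    }
  }
}
```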
Follow-up:
What if there are too many nodes in one bucket? Track the load factor and rehash, or use a red-black tree per bucket.
What if the key is a String? Hash it, e.g. keep a running hash that is multiplied by 31 and incremented by each character's ASCII value, then mod by the table size to get the index.
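A hypothetical helper for the string-key case, assuming the classic multiply-by-31 polynomial hash:

```typescript
// Illustrative string-hash helper; the multiplier 31 is the conventional choice.
function stringHash(key: string, tableSize: number): number {
  let hash = 0;
  for (let i = 0; i < key.length; i++) {
    hash = (hash * 31 + key.charCodeAt(i)) % tableSize; // mod keeps the value in range
  }
  return hash;
}
```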
Solution 1: two stacks.
Solution 2: use a LinkedList and a TreeMap to store the values.
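The notes don't name the problem here, but the two listed solutions match the classic max-stack design; a minimal sketch of Solution 1 (two stacks), assuming push/pop/top/peekMax operations (popMax, removing the max from the middle, is what motivates Solution 2's LinkedList + TreeMap):

```typescript
class MaxStack {
  private stack: number[] = [];
  private maxStack: number[] = []; // maxStack[i] = max of stack[0..i]

  push(x: number): void {
    this.stack.push(x);
    const curMax =
      this.maxStack.length === 0 ? x : Math.max(x, this.maxStack[this.maxStack.length - 1]);
    this.maxStack.push(curMax);
  }

  pop(): number {
    this.maxStack.pop();
    return this.stack.pop()!;
  }

  top(): number {
    return this.stack[this.stack.length - 1];
  }

  peekMax(): number {
    return this.maxStack[this.maxStack.length - 1];
  }
}
```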
This is similar to LFU. Each bucket has a previous bucket and a next bucket, plus a keySet storing all keys at that frequency, and the frequency (count) itself. Build two maps:
Key: Count Value: Bucket
Key: String Key Value: Count
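A sketch of the bucket structure and the two maps described above; the names are illustrative:

```typescript
// Buckets form a doubly linked list ordered by count.
class Bucket {
  keys = new Set<string>(); // all keys that share this frequency
  prev: Bucket | null = null;
  next: Bucket | null = null;
  constructor(public count: number) {}
}

// The two maps described above:
const bucketOfCount = new Map<number, Bucket>(); // count -> bucket
const countOfKey = new Map<string, number>();    // key -> count
```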
BinarySearch version
TreeMap version
Store all occupied seats in a TreeMap and loop over it to detect the largest gap.
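The problem isn't named in these notes, but the description matches the exam-room style "pick the seat farthest from its nearest neighbor" question. A sketch using a sorted array in place of a TreeMap; n (the total number of seats) is an assumed constructor argument:

```typescript
class ExamRoom {
  private seats: number[] = []; // occupied seats, kept sorted

  constructor(private n: number) {}

  // Choose the seat that maximizes distance to the nearest occupied seat.
  seat(): number {
    if (this.seats.length === 0) {
      this.seats.push(0);
      return 0;
    }
    let bestSeat = 0;
    let bestDist = this.seats[0]; // gap before the first occupied seat
    for (let i = 1; i < this.seats.length; i++) {
      const gap = Math.floor((this.seats[i] - this.seats[i - 1]) / 2);
      if (gap > bestDist) {
        bestDist = gap;
        bestSeat = this.seats[i - 1] + gap;
      }
    }
    // Gap after the last occupied seat.
    if (this.n - 1 - this.seats[this.seats.length - 1] > bestDist) {
      bestSeat = this.n - 1;
    }
    this.seats.push(bestSeat);
    this.seats.sort((a, b) => a - b);
    return bestSeat;
  }

  leave(p: number): void {
    this.seats = this.seats.filter(s => s !== p);
  }
}
```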
Mine Sweeper
Config support Cache
Design a simple caching mechanism that supports configuration. An example of configuration would be the ability to provide different eviction policies, e.g. least recently used, least frequently used, most memory consuming, etc. Another example of configuration might be to specify the type of storage used, e.g. local heap, local disc, remote, etc.
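A minimal sketch of one way to structure this; the EvictionPolicy interface, LRUPolicy, and ConfigurableCache names are my own, and only an LRU policy (no storage-backend configuration) is shown:

```typescript
interface EvictionPolicy<K> {
  onAccess(key: K): void; // called when an existing key is read or updated
  onInsert(key: K): void; // called when a new key is added
  evict(): K;             // choose which key to remove
}

class LRUPolicy<K> implements EvictionPolicy<K> {
  private order: K[] = []; // least recently used at the front
  onAccess(key: K): void {
    this.order = this.order.filter(k => k !== key);
    this.order.push(key);
  }
  onInsert(key: K): void {
    this.order.push(key);
  }
  evict(): K {
    return this.order.shift()!;
  }
}

class ConfigurableCache<K, V> {
  private store = new Map<K, V>();
  constructor(private capacity: number, private policy: EvictionPolicy<K>) {}

  get(key: K): V | undefined {
    if (!this.store.has(key)) return undefined;
    this.policy.onAccess(key);
    return this.store.get(key);
  }

  put(key: K, value: V): void {
    if (this.store.has(key)) {
      this.policy.onAccess(key);
    } else {
      if (this.store.size >= this.capacity) {
        this.store.delete(this.policy.evict()); // make room per the configured policy
      }
      this.policy.onInsert(key);
    }
    this.store.set(key, value);
  }
}

// Usage: const cache = new ConfigurableCache<string, number>(2, new LRUPolicy<string>());
```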
1396. Design Underground System
Problem: Logs LiveTail Matcher
You're implementing a "Live Tail" feature, similar to Datadog's, which:
Lets users enter queries (like search terms).
Continuously ingests logs as they appear.
Matches incoming logs against previously submitted queries and emits results.
🧾 Input
You're given a list of strings in the following format:
Lines that begin with "Q: " are queries. Each query is a list of space-separated words.
All words in the query must match a log for it to be considered a match.
Matching is case-insensitive and only for whole words.
Each query is assigned a unique ID (starting from 1).
Lines that begin with "L: " are log lines. A log is checked against all previous queries.
If a log matches any query, output a match result with the matching query ID(s).
Output
Produce a list of result strings:
For each query, output:
For each log that matches one or more queries:
📌 Matching Rules
Matching is case-insensitive
All words in a query must be present in the log (as whole words)
Word comparison is based on whitespace tokenization
If no queries match a log, the log is ignored
🔢 Example
Input:
Output:
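A sketch of the matcher. Since the exact output strings are not shown above, results are returned as (log, matching query IDs) pairs; formatting them into the required strings is left as an assumption:

```typescript
class LiveTail {
  private queries: { id: number; words: string[] }[] = [];
  private nextId = 1; // query IDs start from 1

  // Tokenize on whitespace and lowercase for case-insensitive whole-word matching.
  private tokenize(text: string): string[] {
    return text.toLowerCase().split(/\s+/).filter(w => w.length > 0);
  }

  addQuery(query: string): number {
    const id = this.nextId++;
    this.queries.push({ id, words: this.tokenize(query) });
    return id;
  }

  // Return the IDs of all previously added queries whose words all appear in the log.
  ingestLog(log: string): number[] {
    const logWords = new Set(this.tokenize(log));
    return this.queries
      .filter(q => q.words.every(w => logWords.has(w)))
      .map(q => q.id);
  }
}

// Driver that walks the "Q: " / "L: " input lines described above.
function processLines(lines: string[]): { log: string; ids: number[] }[] {
  const tail = new LiveTail();
  const results: { log: string; ids: number[] }[] = [];
  for (const line of lines) {
    if (line.startsWith("Q: ")) {
      tail.addQuery(line.slice(3));
    } else if (line.startsWith("L: ")) {
      const log = line.slice(3);
      const ids = tail.ingestLog(log);
      if (ids.length > 0) results.push({ log, ids }); // non-matching logs are ignored
    }
  }
  return results;
}
```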