MongoDB CRUD Operations Explained — Patterns, Pitfalls and Real-World Usage
Every application that stores data — whether it's a social media feed, an e-commerce cart, or a hospital records system — needs to talk to a database. MongoDB has become the go-to choice for teams building flexible, fast-moving products because it stores data as JSON-like documents instead of rigid rows and columns. Knowing how to speak its language isn't optional; it's the difference between an app that ships and one that stalls.
Create — Inserting Documents the Right Way
Inserting data sounds trivial until you're doing it wrong in production. MongoDB gives you two insertion methods: insertOne() for a single document and insertMany() for a batch. The key insight most tutorials skip is what MongoDB hands back after an insert — an acknowledgement object containing the auto-generated _id. That _id is a 12-byte ObjectId, globally unique by design, and you should be capturing it in your application logic rather than ignoring it.
Why does this matter? Because in a typical e-commerce flow you insert an order document, then immediately need that order's _id to create a shipment record that references it. Ignoring the return value forces a second round-trip to the database just to find what MongoDB already told you.
insertMany() is more nuanced. By default it's ordered, meaning if document number 3 of 10 fails validation, documents 1 and 2 are already committed and 4-10 are abandoned. Passing the option { ordered: false } lets MongoDB push through all valid documents and collect errors at the end — a much better pattern for bulk imports where one bad record shouldn't kill the whole batch.
```javascript
// Connect to a local MongoDB instance using the official Node.js driver
const { MongoClient } = require('mongodb');

async function insertOrders() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const db = client.db('storefront');               // target database
    const ordersCollection = db.collection('orders'); // target collection

    // --- insertOne: single document ---
    // MongoDB auto-generates _id if you don't provide one
    const singleInsertResult = await ordersCollection.insertOne({
      customerId: 'cust_8821',
      items: [
        { productSku: 'SHOE-RED-42', quantity: 1, unitPrice: 79.99 },
        { productSku: 'LACE-BLK', quantity: 2, unitPrice: 3.50 }
      ],
      orderStatus: 'pending',
      createdAt: new Date() // always store dates as Date objects, not strings
    });
    // insertedId is the ObjectId MongoDB assigned — save this, don't query for it again
    console.log('New order ID:', singleInsertResult.insertedId);

    // --- insertMany: bulk insert with ordered:false so one bad doc doesn't abort the rest ---
    const bulkOrders = [
      { customerId: 'cust_1103', items: [{ productSku: 'HAT-GRN-M', quantity: 1, unitPrice: 24.00 }], orderStatus: 'pending', createdAt: new Date() },
      { customerId: 'cust_4477', items: [{ productSku: 'BELT-BRN-L', quantity: 1, unitPrice: 34.50 }], orderStatus: 'pending', createdAt: new Date() }
    ];
    const bulkInsertResult = await ordersCollection.insertMany(bulkOrders, {
      ordered: false // continue inserting even if one document fails validation
    });
    // insertedCount tells you how many actually landed
    console.log('Orders inserted:', bulkInsertResult.insertedCount);
    console.log('Inserted IDs:', bulkInsertResult.insertedIds);
  } finally {
    await client.close(); // always close the connection
  }
}

insertOrders().catch(console.error);
```
```
Orders inserted: 2
Inserted IDs: { '0': 64f3a2b1c9e77f001a3d8e13, '1': 64f3a2b1c9e77f001a3d8e14 }
```
Read — Querying Documents Without Killing Your Database
Reading data is where most MongoDB performance problems are born. find() returns a cursor, not an array — meaning MongoDB streams results lazily rather than loading everything into memory at once. This is a feature, not a quirk, and it matters the moment your collection grows past a few thousand documents.
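The lazy-streaming behaviour is easiest to see with `for await...of`, which pulls documents from the cursor one batch at a time instead of materialising everything with `toArray()`. The sketch below is a hypothetical helper, not a driver method; it works on any async-iterable source, and a driver cursor is exactly that:

```javascript
// A minimal sketch: total up order values while streaming a cursor.
// sumPendingTotals is a hypothetical helper; the items/quantity/unitPrice
// field names follow the orders collection used throughout this article.
async function sumPendingTotals(cursor) {
  let total = 0;
  // for await pulls documents lazily: the driver fetches the next server
  // batch only when the current one is exhausted, so memory use stays flat
  // no matter how large the collection is.
  for await (const order of cursor) {
    for (const item of order.items) {
      total += item.quantity * item.unitPrice;
    }
  }
  return total;
}

// Usage (assumes an open `ordersCollection` as in the other examples):
// const total = await sumPendingTotals(
//   ordersCollection.find({ orderStatus: 'pending' })
// );
```

Because the function only depends on the async-iteration protocol, it works identically whether the source is a live cursor or an in-memory test double.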
The filter argument is where the real power lives. MongoDB's query language is composable: you can filter by exact match, range ($gte, $lte), array membership ($in), logical operators ($and, $or), and even run regex searches — all within a single query object. But power without discipline is dangerous. Running find({}) on a million-document collection with no limit() is how you bring a production server to its knees.
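To make the composability concrete, here is a sketch of a single filter object that combines a range, a logical `$or`, and a regex. The helper name and the `cust_` prefix convention are illustrative assumptions, but the operators are standard MongoDB query syntax:

```javascript
// Hypothetical builder combining several operators in one filter document.
// Field names (createdAt, orderStatus, customerId) match the orders
// collection used elsewhere in this article; adjust to your schema.
function buildRecentOrderFilter(sinceDate) {
  return {
    createdAt: { $gte: sinceDate },                        // range: on or after sinceDate
    $or: [                                                 // logical OR of two sub-filters
      { orderStatus: 'pending' },
      { orderStatus: { $in: ['processing', 'dispatched'] } }
    ],
    customerId: { $regex: /^cust_/ }                       // regex: IDs starting with "cust_"
  };
}

// Usage (assumes an open `ordersCollection` as in the other examples):
// const docs = await ordersCollection
//   .find(buildRecentOrderFilter(new Date('2024-09-01')))
//   .limit(25)
//   .toArray();
```

Building filters as plain objects like this also makes them trivially unit-testable before they ever touch a database.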
Projection is the query-level equivalent of SELECT in SQL — it tells MongoDB which fields to return. Always use it. Fetching a 40-field customer document when your UI only needs name and email wastes bandwidth, serialization time, and memory on both sides of the wire.
Indexes are what make reads fast, but that's a separate topic. The habit to build now is: every field you filter or sort on should eventually have an index behind it. Use explain('executionStats') on any query you care about to see whether MongoDB is doing a full collection scan (bad) or an index scan (good).
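A sketch of that habit in practice: `explain('executionStats')` returns a plan tree whose stages reveal whether an index was used. The `usesIndex()` helper below is a hypothetical convenience, not part of the driver, but the `queryPlanner.winningPlan` shape it walks is the standard explain output:

```javascript
// Walk the winning plan: an IXSCAN stage anywhere in the tree means an
// index was used; a COLLSCAN means a full collection scan.
// usesIndex is a hypothetical helper written for this article.
function usesIndex(explainResult) {
  let stage = explainResult.queryPlanner.winningPlan;
  while (stage) {
    if (stage.stage === 'IXSCAN') return true;
    if (stage.stage === 'COLLSCAN') return false;
    stage = stage.inputStage; // descend into nested stages (e.g. FETCH -> IXSCAN)
  }
  return false;
}

// Usage (assumes an open `ordersCollection` as in the other examples):
// const plan = await ordersCollection
//   .find({ orderStatus: 'pending' })
//   .explain('executionStats');
// console.log('Index used?', usesIndex(plan));
// console.log('Docs examined:', plan.executionStats.totalDocsExamined);
```

Comparing `totalDocsExamined` against the number of documents actually returned is the quickest smell test: a large gap means MongoDB is scanning far more than it gives back.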
```javascript
const { MongoClient } = require('mongodb');

async function queryOrders() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const ordersCollection = client.db('storefront').collection('orders');

    // --- findOne: grab the first match, returns a plain object (not a cursor) ---
    const latestPendingOrder = await ordersCollection.findOne(
      { orderStatus: 'pending' }, // filter: only pending orders
      {
        projection: { customerId: 1, items: 1, createdAt: 1, _id: 0 }, // only return these fields
        sort: { createdAt: -1 } // newest first
      }
    );
    console.log('Latest pending order:', latestPendingOrder);

    // --- find with range filter: orders created in the last 7 days ---
    const sevenDaysAgo = new Date(Date.now() - 7 * 24 * 60 * 60 * 1000);
    const recentOrders = await ordersCollection
      .find(
        {
          createdAt: { $gte: sevenDaysAgo },                // $gte = greater than or equal to
          orderStatus: { $in: ['pending', 'processing'] }   // $in matches any value in the array
        },
        { projection: { customerId: 1, orderStatus: 1, createdAt: 1 } }
      )
      .sort({ createdAt: -1 })
      .limit(25)    // ALWAYS limit open-ended queries in production
      .toArray();   // materialise the cursor into an array

    console.log(`Found ${recentOrders.length} recent orders`);
    recentOrders.forEach(order => {
      console.log(`  ${order.customerId} — ${order.orderStatus} — ${order.createdAt.toISOString()}`);
    });

    // --- countDocuments: how many total pending orders exist? ---
    // Use countDocuments() NOT count() — count() is deprecated and can be inaccurate,
    // especially on sharded clusters
    const pendingTotal = await ordersCollection.countDocuments({ orderStatus: 'pending' });
    console.log('Total pending orders:', pendingTotal);
  } finally {
    await client.close();
  }
}

queryOrders().catch(console.error);
```
```
Found 3 recent orders
  cust_4477 — pending — 2024-09-02T14:23:01.000Z
  cust_1103 — pending — 2024-09-02T14:22:58.000Z
  cust_8821 — pending — 2024-09-02T14:22:55.000Z
Total pending orders: 3
```
Update — Changing Data Without Replacing It
Updating is where MongoDB beginners most often shoot themselves in the foot. The critical rule: always use an update operator like $set, $inc, or $push. If you pass a plain document as the second argument to updateOne(), MongoDB treats it as a full replacement and wipes every field not in your update object. That's a legal operation, but it's almost never what you want.
MongoDB's update operators are surgical. $set modifies only the fields you name, leaving everything else untouched. $inc atomically increments a number — perfect for view counters or inventory tracking without a read-modify-write cycle. $push appends to an array, and $pull removes from one. $unset deletes a field entirely.
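The main code example below covers $set and $inc; the array and field operators can be sketched as small builders that construct the update documents. The helper names are hypothetical, and the `internalNotes` field is an illustrative assumption, but the operator syntax is standard:

```javascript
// Hypothetical builders for the array/field operators described above.
// They only construct update documents — pass the result to updateOne().
function pushItemUpdate(item) {
  return { $push: { items: item } };            // append `item` to the items array
}
function pullItemUpdate(productSku) {
  return { $pull: { items: { productSku } } };  // remove every array element matching the SKU
}
function unsetFieldUpdate(fieldName) {
  return { $unset: { [fieldName]: '' } };       // delete the field; the value given is ignored
}

// Usage (assumes an open `ordersCollection` as in the other examples):
// await ordersCollection.updateOne(
//   { _id: targetOrderId },
//   pushItemUpdate({ productSku: 'SOCK-WHT-M', quantity: 3, unitPrice: 5.00 })
// );
// await ordersCollection.updateOne({ _id: targetOrderId }, pullItemUpdate('LACE-BLK'));
// await ordersCollection.updateOne({ _id: targetOrderId }, unsetFieldUpdate('internalNotes'));
```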
The upsert option is powerful but underused. Setting { upsert: true } tells MongoDB: if the filter matches something, update it; if nothing matches, create a new document. This collapses a common 'find-then-insert-or-update' pattern into a single atomic operation — no race conditions, no extra round-trips.
For bulk changes, updateMany() applies your update to every document matching the filter. Just make sure your filter is tight. Running updateMany({}, { $set: { archived: true } }) marks every single document in the collection as archived — no confirmation prompt, no undo.
```javascript
const { MongoClient, ObjectId } = require('mongodb');

async function updateOrders() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const ordersCollection = client.db('storefront').collection('orders');

    // --- updateOne with $set: change a single order's status ---
    // ALWAYS use $set — without it, your whole document gets replaced
    const targetOrderId = new ObjectId('64f3a2b1c9e77f001a3d8e12');
    const statusUpdateResult = await ordersCollection.updateOne(
      { _id: targetOrderId }, // filter: match by exact _id
      {
        $set: {
          orderStatus: 'dispatched',
          dispatchedAt: new Date() // add a new field on the fly — no schema migration needed
        }
      }
    );
    console.log('Documents matched:', statusUpdateResult.matchedCount);   // how many matched the filter
    console.log('Documents modified:', statusUpdateResult.modifiedCount); // how many were actually changed

    // --- $inc: atomically decrement stock without a read-modify-write ---
    // This is safe under concurrent writes; plain read+write is NOT
    const inventoryCollection = client.db('storefront').collection('inventory');
    await inventoryCollection.updateOne(
      { productSku: 'SHOE-RED-42' },
      { $inc: { stockCount: -1 } } // subtract 1 from stockCount atomically
    );

    // --- upsert: create a shipment record if it doesn't exist, update if it does ---
    const shipmentResult = await client.db('storefront').collection('shipments').updateOne(
      { orderId: targetOrderId }, // filter: does a shipment for this order exist?
      {
        $set: {
          orderId: targetOrderId,
          carrier: 'FastFreight',
          trackingNumber: 'FF-993821-XZ',
          estimatedDelivery: new Date('2024-09-05')
        }
      },
      { upsert: true } // create it if it doesn't exist
    );
    // upsertedId is non-null only when a new document was created
    if (shipmentResult.upsertedId) {
      console.log('Shipment record created with ID:', shipmentResult.upsertedId);
    } else {
      console.log('Existing shipment record updated');
    }

    // --- updateMany: mark all orders older than 30 days as 'archived' ---
    const thirtyDaysAgo = new Date(Date.now() - 30 * 24 * 60 * 60 * 1000);
    const archiveResult = await ordersCollection.updateMany(
      { createdAt: { $lt: thirtyDaysAgo }, orderStatus: 'dispatched' }, // tight filter!
      { $set: { orderStatus: 'archived', archivedAt: new Date() } }
    );
    console.log(`Archived ${archiveResult.modifiedCount} old orders`);
  } finally {
    await client.close();
  }
}

updateOrders().catch(console.error);
```
```
Documents matched: 1
Documents modified: 1
Shipment record created with ID: 64f3a2b1c9e77f001a3d9f01
Archived 0 old orders
```
Delete — Removing Data Safely and Intentionally
Deletion in MongoDB is permanent and instantaneous. There's no recycle bin, no soft-delete built in, and no ROLLBACK. This is why production teams almost universally implement soft-deletes — adding a deletedAt timestamp field and filtering it out of queries — rather than physically removing documents. Physical deletion is reserved for true cleanup jobs like purging GDPR-expired data or clearing test fixtures.
MongoDB gives you deleteOne() for surgical removal of a single document and deleteMany() for bulk removal. The same golden rule from updates applies: if your filter is too broad, you will delete more than you intended. Always test your filter with a find() call first — confirm the count and spot-check a few returned documents before converting it to a deleteMany().
findOneAndDelete() is the atomic 'grab it and kill it' operation. It deletes the document and returns it to your application in a single server-side operation. This is exactly what you need for job queue patterns where a worker claims a task — using separate find() then deleteOne() calls creates a race condition where two workers could claim the same job.
Never run deleteMany({}) in production without a filter. There's no faster way to have a very bad day.
```javascript
const { MongoClient, ObjectId } = require('mongodb');

async function deleteOrders() {
  const client = new MongoClient('mongodb://localhost:27017');
  try {
    await client.connect();
    const db = client.db('storefront');
    const ordersCollection = db.collection('orders');

    // --- PATTERN 1: Soft delete (recommended for business data) ---
    // Don't physically remove — mark as deleted so you keep the audit trail
    const orderToSoftDelete = new ObjectId('64f3a2b1c9e77f001a3d8e13');
    await ordersCollection.updateOne(
      { _id: orderToSoftDelete },
      {
        $set: {
          isDeleted: true,
          deletedAt: new Date() // lets you audit WHEN it was deleted and query "deleted in last 30 days"
        }
      }
    );
    console.log('Order soft-deleted (record preserved for audit)');

    // --- PATTERN 2: Hard delete with deleteOne ---
    // Appropriate for test data, temp records, or GDPR erasure requests
    const tempOrderId = new ObjectId('64f3a2b1c9e77f001a3d8e14');
    const hardDeleteResult = await ordersCollection.deleteOne({ _id: tempOrderId });
    console.log('Hard-deleted document count:', hardDeleteResult.deletedCount);
    // deletedCount will be 0 if the _id didn't exist — not an error, just a miss

    // --- PATTERN 3: findOneAndDelete — atomic claim-and-remove for job queues ---
    const jobsCollection = db.collection('pendingEmailJobs');
    // Grab the highest-priority job AND remove it in one atomic step.
    // If two workers call this simultaneously, only one gets the document.
    // findOneAndDelete always returns the deleted document (or null).
    const claimedJob = await jobsCollection.findOneAndDelete(
      { status: 'queued' },
      { sort: { priority: -1, queuedAt: 1 } } // highest priority first, then FIFO
    );
    if (claimedJob) {
      console.log('Worker claimed job:', claimedJob.jobId, '| Recipient:', claimedJob.recipientEmail);
    } else {
      console.log('No jobs in queue right now');
    }

    // --- PATTERN 4: deleteMany for bulk cleanup (always preview first!) ---
    const ninetyDaysAgo = new Date(Date.now() - 90 * 24 * 60 * 60 * 1000);
    // Step 1: preview — what would get deleted?
    const toDeleteCount = await ordersCollection.countDocuments({
      isDeleted: true,
      deletedAt: { $lt: ninetyDaysAgo } // soft-deleted more than 90 days ago
    });
    console.log(`About to permanently purge ${toDeleteCount} soft-deleted orders...`);

    // Step 2: execute only when you're sure
    const purgeResult = await ordersCollection.deleteMany({
      isDeleted: true,
      deletedAt: { $lt: ninetyDaysAgo }
    });
    console.log('Purged:', purgeResult.deletedCount, 'old orders');
  } finally {
    await client.close();
  }
}

deleteOrders().catch(console.error);
```
```
Order soft-deleted (record preserved for audit)
Hard-deleted document count: 1
No jobs in queue right now
About to permanently purge 0 soft-deleted orders...
Purged: 0 old orders
```
| Operation | Single Document Method | Multiple Documents Method | Atomic Grab-and-Act |
|---|---|---|---|
| Create | insertOne(doc) | insertMany([docs], {ordered:false}) | N/A |
| Read | findOne(filter, options) | find(filter).limit(n).toArray() | N/A |
| Update | updateOne(filter, {$set:{...}}) | updateMany(filter, {$set:{...}}) | findOneAndUpdate() |
| Delete | deleteOne(filter) | deleteMany(filter) | findOneAndDelete() |
| Upsert support | Yes — {upsert:true} option | Yes — {upsert:true} option | Yes — {upsert:true} option |
| Returns modified doc | No — returns result metadata | No — returns result metadata | Yes — returns document |
| Race-condition safe | Not inherently | Not inherently | Yes — single atomic op |
🎯 Key Takeaways
- Always use update operators like $set and $inc — passing a plain object as the update argument replaces the whole document silently, which is almost never what you want.
- find() returns a lazy Cursor, not an array — always chain .toArray() or use for await...of, and always add .limit() to open-ended queries before they hit production traffic.
- findOneAndDelete() and findOneAndUpdate() aren't just convenience methods — they're the only way to atomically claim-and-act on a document without race conditions between concurrent processes.
- Soft-deletes (adding isDeleted: true and deletedAt fields) preserve your audit trail and make GDPR compliance practical. Reserve physical deletion for temp data, test fixtures, and scheduled purge jobs.
⚠ Common Mistakes to Avoid
- ✕Mistake 1: Updating without $set — Writing updateOne({_id: id}, { status: 'active' }) instead of updateOne({_id: id}, { $set: { status: 'active' } }) silently replaces the entire document with just { status: 'active' }, deleting every other field. Fix: always wrap your changes in a $set, $inc, or other update operator.
- ✕Mistake 2: Comparing string IDs to ObjectId — Querying find({ _id: '64f3a2b1c9e77f001a3d8e12' }) always returns zero results because MongoDB stores _id as an ObjectId type, not a string. The types don't match so no document is found — no error, just silence. Fix: always wrap the ID string in new ObjectId('...') before querying.
- ✕Mistake 3: Using the deprecated count() instead of countDocuments() — count() without a filter can return stale collection metadata instead of an accurate live count, especially on sharded clusters. This leads to pagination bugs where the total count is wrong. Fix: use countDocuments(filter), which runs a real aggregation over live data and respects your filter.
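Mistake 2 above is worth guarding against in code. A canonical ObjectId string is exactly 24 hex characters; the sketch below is a simplified, hypothetical pre-check (the driver's own ObjectId.isValid() performs an equivalent test, plus a 12-byte form):

```javascript
// Simplified validity check for an ObjectId string: exactly 24 hex chars.
// looksLikeObjectId is a hypothetical helper written for this article.
function looksLikeObjectId(idString) {
  return typeof idString === 'string' && /^[0-9a-fA-F]{24}$/.test(idString);
}

// Usage with the driver (assumes `ordersCollection` is open as elsewhere):
// const { ObjectId } = require('mongodb');
// if (!looksLikeObjectId(req.params.id)) {
//   throw new Error(`Not a valid ObjectId: ${req.params.id}`);
// }
// const order = await ordersCollection.findOne({ _id: new ObjectId(req.params.id) });
```

Rejecting malformed IDs up front turns the silent zero-result failure mode into a loud, debuggable error.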
Interview Questions on This Topic
- QWhat's the difference between updateOne() and replaceOne() in MongoDB, and when would you deliberately choose replaceOne()?
- QHow would you implement an atomic 'claim a task from a queue' operation in MongoDB without creating a race condition between two simultaneous workers?
- QIf countDocuments() and estimatedDocumentCount() both count documents, why do they exist separately — and which would you use on a 50-million-document collection for a real-time dashboard?
Frequently Asked Questions
What is the difference between insertOne and insertMany in MongoDB?
insertOne() adds a single document and returns a result with the new document's _id. insertMany() adds an array of documents in one network round-trip, which is dramatically faster for bulk loads. The key option to know is { ordered: false }, which tells MongoDB to continue inserting valid documents even if some fail, rather than aborting the entire batch on the first error.
Why does my MongoDB updateOne() query delete all my document fields?
You're passing a plain object as the second argument instead of using an update operator. updateOne({ _id: id }, { status: 'active' }) replaces the document entirely. You need updateOne({ _id: id }, { $set: { status: 'active' } }) — the $set operator tells MongoDB to modify only the named fields and leave everything else alone.
Does MongoDB have transactions like SQL databases?
Yes — since version 4.0, MongoDB supports multi-document ACID transactions. For single-document operations, MongoDB has always been atomic. Multi-document transactions are available on replica sets and sharded clusters, but they come with a performance cost. The best practice is to model your data so that most operations only need to touch one document, reserving transactions for the rare cases where you genuinely need cross-collection atomicity.
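A minimal sketch of such a transaction, assuming a replica-set deployment and the same storefront database as the earlier examples (the moveStock helper, the stockLedger collection, and the field names are illustrative assumptions):

```javascript
// Transactional body: decrement stock and write a ledger entry so both
// changes commit together or not at all. Collections and the session are
// injected so the logic is testable without a live server.
async function moveStock(inventory, ledger, sku, qty, session) {
  await inventory.updateOne(
    { productSku: sku },
    { $inc: { stockCount: -qty } },
    { session } // every operation in the transaction must carry the session
  );
  await ledger.insertOne(
    { productSku: sku, change: -qty, at: new Date() },
    { session }
  );
}

// Usage (assumes an open `client` as in the other examples; withTransaction
// retries on transient errors and commits when the callback resolves):
// const session = client.startSession();
// try {
//   await session.withTransaction(() =>
//     moveStock(
//       client.db('storefront').collection('inventory'),
//       client.db('storefront').collection('stockLedger'),
//       'SHOE-RED-42', 1, session
//     )
//   );
// } finally {
//   await session.endSession();
// }
```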
Written and reviewed by senior developers with real-world experience across enterprise, startup and open-source projects. Every article on TheCodeForge is written to be clear, accurate and genuinely useful — not just SEO filler.