java – MongoDB – insert slows down after 5,000,000 records


Hello everyone. The problem, in general, is this: a Java project uses the NoSQL database MongoDB and has to work with roughly 1,000,000,000 records. Inserting the first 5,000,000 records into an empty collection takes 13-15 minutes, but beyond that point insertion slows down almost exponentially and RAM consumption grows without bound. The priority task for this database is search (search speed over 10,000,000 records is satisfactory).


  1. Why is this happening?
  2. How to fix it?

Possible solutions:

  1. Every 5,000,000 records – create a new collection?

  2. Index optimization? (searches are by id)

  3. MongoDB config optimization?

  4. System optimization?

  5. Database replacement?
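
Regarding options 3 and 4: one thing worth checking is the WiredTiger cache size. By default mongod claims roughly half of the machine's RAM for its cache, which can look like the process "devouring" memory. A sketch of capping it in `mongod.conf` (the 4 GB value is an assumption; tune it for your machine):

```yaml
# mongod.conf fragment: cap the WiredTiger cache so mongod
# does not grow toward its default of ~50% of system RAM.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 4   # assumed value, adjust to your hardware
```

This does not by itself make inserts faster, but it makes memory usage predictable on a shared machine.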

Thanks in advance for your reply!
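
A common reason for exactly this pattern is that every insert also updates the indexes, and once the index working set no longer fits in RAM each insert turns into random disk I/O, so throughput falls off a cliff. One partial mitigation is to stop inserting documents one at a time and instead send fixed-size batches through the driver's `insertMany` with ordered writes disabled. Below is a minimal sketch of the batching logic only; the batch size of 1000 is an assumption, and the actual `insertMany` call is shown as a comment because it requires a live server:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: partition a large sequence of record ids into fixed-size
// batches, each of which would be handed to one insertMany call.
public class BatchInsertSketch {
    static final int BATCH_SIZE = 1000; // assumed; tune per workload

    // Split ids 0..total-1 into consecutive batches of BATCH_SIZE
    // (the last batch may be smaller).
    static List<List<Long>> toBatches(long total) {
        List<List<Long>> batches = new ArrayList<>();
        List<Long> current = new ArrayList<>(BATCH_SIZE);
        for (long id = 0; id < total; id++) {
            current.add(id);
            if (current.size() == BATCH_SIZE) {
                batches.add(current);
                current = new ArrayList<>(BATCH_SIZE);
            }
        }
        if (!current.isEmpty()) {
            batches.add(current);
        }
        return batches;
    }

    public static void main(String[] args) {
        List<List<Long>> batches = toBatches(10_500);
        System.out.println(batches.size()); // 11: ten full batches + one of 500
        // In a real run, each batch would be converted to documents and sent as:
        // collection.insertMany(docs, new InsertManyOptions().ordered(false));
        // Unordered writes let the server apply the batch more efficiently.
    }
}
```

With unordered batches the driver sends far fewer round trips, and a failure in one document does not abort the rest of the batch.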


I had a similar case when I was working with SQLite. I had to switch to MySQL.

But for a billion records, in my opinion, you will have to move to a BigTable-style store, for example HBase on Hadoop, or Cassandra.

As for Cassandra, to be honest, I'm not sure, because Facebook itself switched from Cassandra to HBase, even though they developed Cassandra themselves.
