There is a new requirement for the application at the company where I work: I need to store more than 100,000 records across different tables. I'm worried about whether SQLite will handle this demand. My questions are:
- Does SQLite have a storage limit?
- Does the data type influence the boundary storage?
The row limit is 2^64, meaning there is no practical limit on the number of rows.
There is a 128 TB limit on the total database size in bytes, which in practice is difficult to reach. Note that SQLite keeps the entire database in a single file.
Of course, this limit depends on how SQLite is configured, either at compile time or through a PRAGMA, so it can be changed.
Almost all database limits are configurable, and the relevant ones are listed in the documentation.
Page size can affect database performance positively or negatively. In general, the best results are obtained between 8 KB and 32 KB, which constrains the maximum database size a little more if you want top performance (though the difference is small and depends on the scenario).
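As a minimal sketch of tuning these limits, the snippet below uses Python's built-in `sqlite3` module to set the page size and cap the page count on a fresh database; the specific values (8 KB pages, one million pages) are illustrative, not recommendations:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a fresh, empty database for illustration

# page_size must be set before the database's first page is written
# (on an existing database it only takes effect after VACUUM).
conn.execute("PRAGMA page_size = 8192")

# max_page_count caps the file size: max bytes = page_size * max_page_count.
conn.execute("PRAGMA max_page_count = 1000000")

page_size = conn.execute("PRAGMA page_size").fetchone()[0]
max_pages = conn.execute("PRAGMA max_page_count").fetchone()[0]
print(page_size, max_pages, page_size * max_pages)  # effective size cap in bytes
```

With these illustrative values, the database would be capped at roughly 7.6 GiB, far below the default ceiling but more than enough for 100k records.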
SQLite's biggest scalability problem appears when it must handle a large number of concurrent writes. Since its locking has no granularity, any attempt to write blocks all other writes, even ones touching data that is not in contention. Writes do not block reads as long as you use WAL mode, which is recommended in almost all situations.
The data type does not influence the storage limit, only the space each value occupies. Of course, each type has its own size limit.
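Because SQLite typing is dynamic, each value carries its own storage class regardless of the declared column type; `typeof()` shows which class was actually used. A small sketch (table and column names are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v)")  # no declared type: typing is per-value

# One value of each common storage class.
conn.executemany(
    "INSERT INTO t VALUES (?)",
    [(1,), (1.5,), ("abc",), (b"\x00\x01",)],
)

# typeof() reports the storage class SQLite chose for each stored value.
types = [row[0] for row in conn.execute("SELECT typeof(v) FROM t")]
print(types)
```

Each class has its own ceiling (e.g. integers are at most 8 bytes; the maximum length of a string or BLOB is itself a configurable limit), but none of them changes the database's overall size limit.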