database – Best way to structure a history table with a large amount of data

Question:

Functional requirement:

I have a number of devices. Each device, roughly speaking, has a unique identifier, an IP address, and a type.

I have a ping routine for every device that has an IP address. This routine is nothing more than a C# executable that runs every 3 minutes and tries to ping the device's IP address.

I need to keep the ping result in the database, along with the date of the check (regardless of the ping result).

Technical part:

Assuming my ping process and database structure are ready as of 06/01/2016, I need to do two things:

  • Daily extraction (a table that is populated daily)
  • Real-time extraction (last 24 hours)

Both must return the same thing to me:

  • Devices that have been unavailable for more than 24 hours.
  • Devices that have been unavailable for more than 7 days.

A device is understood to be unavailable if, when pinged, it did not respond to the ping.

A device is understood to be available if, when pinged, it answered the ping successfully.

What I have today (and it works very badly):

A historico_disponibilidade table, with the following structure:

create table historico_disponibilidade (
  id_dispositivo number, -- device identifier
  respondeu      number, -- whether the device answered the ping
  data           date    -- date of the check
);

This table holds a large amount of data (today it has 60 million rows, and the trend is for it to keep growing exponentially).

The questions:

  • How can I achieve these goals without running into slow-query problems?
  • How can I create a table structure (or structures) prepared to receive millions/billions of records within my corporate universe?

Answer:

The problem doesn't seem to be exponential or polynomial. Some problems are, and for those there wouldn't be much to do. If there is an exponential or polynomial implementation, you have to fix it, but the question doesn't make it clear how things are being done, and it seems to me that this is not the case.

The volume of rows in a table should have very little effect on the performance of queries, as long as you have the necessary indexes and the queries themselves don't do anything too weird.
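
For example, if the availability queries filter by device and date, an index covering those columns is the usual starting point. A minimal sketch, assuming those access patterns (the index names are illustrative, not from the question):

create index ix_hist_disp_disp_data
  on historico_disponibilidade (id_dispositivo, data);

-- If most queries instead scan a recent time window across all devices,
-- an index leading with the date column may serve better:
create index ix_hist_disp_data
  on historico_disponibilidade (data);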

Even if it does, the solution would be to erase the data you no longer need. If you need to keep data only for historical purposes and no longer need to run queries on old data (at least not often), the solution is to have an auxiliary table that works as an archive, with rows transferred from the normal table to the archive table. Apparently you only need to keep the last 7 days. Needless to say, this transfer can be automated. It is also possible to use date ranges to partition tables, but I don't know how this works in Oracle.
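
A sketch of both ideas in Oracle syntax; the archive table name and the partition boundary are my assumptions, and interval partitioning requires Oracle 11g or later:

-- One-time creation of an empty archive table with the same structure:
create table historico_disponibilidade_arquivo as
  select * from historico_disponibilidade where 1 = 0;

-- Periodic transfer (could run from a scheduled job):
insert into historico_disponibilidade_arquivo
  select * from historico_disponibilidade where data < sysdate - 7;
delete from historico_disponibilidade where data < sysdate - 7;
commit;

-- Alternatively, declare the table partitioned by day, so old partitions
-- can be dropped or archived wholesale:
create table historico_disponibilidade (
  id_dispositivo number,
  respondeu      number,
  data           date
)
partition by range (data)
interval (numtodsinterval(1, 'DAY'))
(partition p_inicial values less than (date '2016-06-01'));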

All good databases on the market, especially Oracle, are prepared to work with billions of rows without problems. If you have a specific problem in a query and you're not finding a solution (probably with an index), then it's worth posting a specific question.
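
Just to illustrate one possible reading of the requirement (assuming respondeu = 1 means the device answered), the "unavailable for more than 24 hours" report could be a single query; swapping sysdate - 1 for sysdate - 7 would give the 7-day variant:

-- Devices whose most recent successful ping is more than 24 hours old,
-- or that never answered a ping at all (a sketch, not a tested solution):
select id_dispositivo
  from historico_disponibilidade
 group by id_dispositivo
having max(case when respondeu = 1 then data end) < sysdate - 1
    or max(case when respondeu = 1 then data end) is null;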

Note that the table is so simple that there is nothing left to simplify, unless there are things hidden in the question.

I wonder if the whole routine isn't wrong. It seems weird to me to do what is being done; I think it's the right solution to the wrong problem, but I won't go into that.
