For me, the past 24 hours have been a time to experience some large numbers, and the experience may well stretch on for a while longer.
It started with something only slightly out of the ordinary: I was asked to import a "small" number of records from one data table into another, and this small number was 8 million. I told myself, "This is OK. 8 million is small, if they insist."
The records were inserted into the destination table in about 2 hours; everything still seemed under control.
And then, after the data-insert part was done, the screen seemed frozen at one line of output: "About to enable constraints…". Having waited another 3 hours, I decided to take a look and find out what had been going on.
The Enterprise Manager showed that the task was still running with almost no disturbance, and no contention for server resources was detected either. What caught my eye, however, was what showed steadily there as the "active operation": table scan. My first reaction was "oh… this must be a large table," since every DBA knows that a table scan is the least efficient operation on a large table. Now the question was how large the table was.
I ran "select count(*) from …" to check it out, still half hoping the table was not too big. But after 10 minutes nothing had been returned from this command. I started to realize that the table must be humongous. Reading the row count from the table's statistics instead, it turned out to be close to 8 billion!
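The trick above can be sketched in miniature. This is a hypothetical illustration using SQLite as a stand-in for the production server: the table name and sizes are made up, and `sqlite_stat1` plays the role of the real server's statistics views. The point is the same: `COUNT(*)` must touch every row, while the row count the engine already keeps in its statistics can be read back without scanning the table.

```python
# A minimal sketch, assuming a SQLite stand-in for the real server.
# "big_table" and its contents are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE big_table (id INTEGER, payload TEXT)")
conn.executemany(
    "INSERT INTO big_table VALUES (?, ?)",
    ((i, "x") for i in range(100_000)),
)
conn.execute("CREATE INDEX big_table_id ON big_table (id)")

# The exact way: COUNT(*) touches every row -- on a real 8-billion-row
# table this is the scan that never seems to come back.
exact = conn.execute("SELECT COUNT(*) FROM big_table").fetchone()[0]

# The statistics way: ANALYZE records a row count in sqlite_stat1;
# the first integer in the stat column is the table's row count,
# readable without scanning the table itself.
conn.execute("ANALYZE")
stat = conn.execute(
    "SELECT stat FROM sqlite_stat1 WHERE tbl = 'big_table'"
).fetchone()[0]
approx = int(str(stat).split()[0])

print(exact, approx)
```

On a real server the statistics may lag behind the true count slightly, but for answering "is this table millions or billions of rows?" they are more than good enough, and they return instantly.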
At that point I understood why they had called 8 million a small number: it was 1/1000 of the base number.
But is it really necessary to store so much data in a single table (only one of tens of tables in one of hundreds of databases)? It is as if every Chinese person had 5~6 records in this table, taking 1.4 billion as the population of China, the largest in the world.
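The 5~6 figure is easy to verify, using the 8 billion rows and 1.4 billion population quoted above:

```python
# Back-of-the-envelope check of the "5~6 records per person" claim.
table_rows = 8_000_000_000   # roughly 8 billion rows in the table
population = 1_400_000_000   # roughly 1.4 billion people

records_per_person = table_rows / population
print(round(records_per_person, 1))  # about 5.7, i.e. between 5 and 6
```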
Meanwhile, the operation is still going on, and I am not sure how much longer it will run: another 3, 5, or 10 hours?