You made it through the end of the world. Given enough time and care, you may find enough cooperative survivors to begin rebuilding. But how can you replace the knowledge that was lost when everything fell apart?
You can’t, not all of it. But you should think now about grabbing some of the essential files you may need at such a time. Call them your “End of the World Library.”
Here are some free sources offering many thousands of those files:
http://www.themodernsurvivalist.com/archives/2471 (roughly 200)
http://patriotrising.com/survival-pdf-files-manuals-guides/ (almost 2,000)
http://www.survivorlibrary.com/?page_id=1014 (thousands of PDF books in wide range of topics, dating to early 20th century and earlier)
Looking over these lists of PDF files should give you some ideas. The digital books above will help you survive long enough to meet others, and help your group survive, organise, and grow into something that can begin rebuilding a civilisation.
You will want more than reference and “how-to” books, of course. A civilisation will need more than the technology and know-how to survive. But you have to start somewhere.
Here are some reasons why it will be impossible to save everything:
Almost all humans are gone, and with them their knowledge and the tricks of their trades. Museums and libraries are destroyed or suffering inexorable decay. Data centres will lose all backup power and shut down, never to restart.
How Much Information is There?
Modern human societies are continually generating information at a prodigious rate.
The view from the 1990s:
- Cinema. There were 4,615 films made world-wide in 1989; at 5 MB/sec and 7,200 seconds average length, that would be 166 terabytes.
- Images. There are about 52 billion (thousand million) photographs taken each year in the world [Mills 1996]. If each of those is a 10 KB JPG, that is 520,000 terabytes, or 520 petabytes, and these are actually all different. Again, less than 1% represent professionally taken or reviewed pictures, probably less than 0.1%. By comparison even the NASA earth observing project, expected to accumulate 11,000 terabytes [Fargion 1996], doesn’t affect the numbers.
- Broadcasting. In the US, we have 1,593 television stations. If each sends out 5 MB/sec for 30 million seconds per year, that is over 200 petabytes. However, one might expect that only about 1/10 of the programming is actually different for different stations; that is 20 petabytes of distinct programming, and extrapolated to the world would be 80 petabytes. Radio, by contrast, is insignificant; the US has 6,956 radio stations, and if each sends out 30 million seconds per year at 8 KB/sec we would have only about 1.7 petabytes in the United States.
- Sound. Sales of recorded music in the US in 1992 were 407 million CDs and 336 million cassettes (and 20 million vinyl disks, still). Assuming 550 MB for each CD and cassette that would be 400 petabytes, much duplicated of course. If the number of different recordings for sale is about 30,000 this would be 15 terabytes in the US and 60 terabytes world-wide.
- Telephony. The largest storage requirement would come from converting all telephone conversations to digital form. In the US in 1994 there were 500 billion call-minutes of “interlata toll” and there is about 20 times as much local calling, so at 56 kbits/sec this would be 4,000 petabytes of digitized voice. The only thing I am not considering is consumer videotape, on the grounds that much of it is used to record off-the-air TV and duplicates the TV stations.
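These back-of-the-envelope figures are easy to re-derive. The sketch below redoes the arithmetic from the counts and rates quoted above, using decimal units (1 TB = 10^12 bytes, 1 PB = 10^15 bytes); the per-item sizes are the article’s assumptions, not measurements.

```python
# Re-derive the 1990s storage estimates from the figures quoted above.
# Decimal units: 1 TB = 1e12 bytes, 1 PB = 1e15 bytes.
TB, PB = 1e12, 1e15

films     = 4_615 * 5e6 * 7_200         # 4,615 films, 5 MB/sec, 7,200 s each
photos    = 52e9 * 10e3                 # 52 billion photos at 10 KB each
tv        = 1_593 * 5e6 * 30e6          # US TV stations, 5 MB/sec, 30M s/yr
radio     = 6_956 * 8e3 * 30e6          # US radio stations at 8 KB/sec
music     = (407e6 + 336e6) * 550e6     # CDs + cassettes at 550 MB each
telephony = 500e9 * 21 * 60 * 56e3 / 8  # interlata minutes, x21 for local,
                                        # 60 s/min at 56 kbit/s = 7 KB/s

print(f"film:      {films / TB:8.0f} TB")      # ~166 TB, as quoted
print(f"photos:    {photos / TB:8.0f} TB")     # ~520 TB at 10 KB each (the
                                               # quoted 520 PB implies ~10 MB
                                               # per image instead)
print(f"TV:        {tv / PB:8.0f} PB")         # ~239 PB, "over 200 petabytes"
print(f"radio:     {radio / PB:8.1f} PB")      # ~1.7 PB
print(f"music:     {music / PB:8.0f} PB")      # ~409 PB, "400 petabytes"
print(f"telephony: {telephony / PB:8.0f} PB")  # ~4,410 PB, "4,000 petabytes"
```

Running the numbers yourself is a good habit: it shows immediately that telephony dominates the 1990s totals, and it flags which quoted figures rest on which per-item size assumptions.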
The view from the 2000s:
• In 2007, humankind successfully sent 1.9 zettabytes of information through broadcast technology such as televisions and GPS. That’s equivalent to every person in the world reading 174 newspapers every day.
• On two-way communications technology, such as cell phones, humankind shared 65 exabytes of information through telecommunications in 2007, the equivalent of every person in the world communicating the contents of six newspapers every day.
• In 2007, all the general-purpose computers in the world computed 6.4 x 10^18 instructions per second, in the same general order of magnitude as the number of nerve impulses executed by a single human brain. Doing these instructions by hand would take 2,200 times the period since the Big Bang.
• From 1986 to 2007, the period of time examined in the study, worldwide computing capacity grew 58 percent a year, 10 times faster than the United States’ gross domestic product.
• Telecommunications grew 28 percent annually and storage capacity grew 23 percent a year. (Source: https://news.usc.edu/29360/How-Much-Information-Is-There-in-the-World/)
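Those growth rates compound dramatically over the 21 years the study covers (1986 to 2007). A quick check, assuming each quoted percentage is an annual compound rate:

```python
# Compound the annual growth rates quoted above over 1986-2007.
years = 2007 - 1986  # 21 years

computing = 1.58 ** years  # 58% per year
telecom   = 1.28 ** years  # 28% per year
storage   = 1.23 ** years  # 23% per year

print(f"computing capacity: ~{computing:,.0f}x")  # roughly 15,000-fold
print(f"telecommunications: ~{telecom:,.0f}x")    # roughly 180-fold
print(f"storage capacity:   ~{storage:,.0f}x")    # roughly 80-fold
```

In other words, a steady 58 percent a year multiplies capacity by roughly fifteen thousand over two decades, which is why the totals keep jumping unit prefixes from one study to the next.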
And the information flow keeps growing.
As of 2014, Google had indexed about 200 terabytes (TB) of data. To put that into perspective, 1 TB is equivalent to 1,024 gigabytes (GB). However, Google’s 200 TB is just an estimated 0.004 percent of the total Internet. Perhaps even more impressive is the fact that 16 years’ worth of video is uploaded to YouTube every day.
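Taken at face value, those two numbers imply a total size for the Internet. A rough sketch using only the quoted figures:

```python
# If Google's 200 TB index is 0.004 percent of the total Internet,
# the implied total size is:
indexed_tb = 200
fraction = 0.004 / 100            # 0.004 percent as a fraction

total_tb = indexed_tb / fraction  # implied total in TB
total_eb = total_tb / 1e6         # decimal units: 1 EB = 1e6 TB

print(f"implied total: {total_tb:,.0f} TB (~{total_eb:.0f} exabytes)")
# -> implied total: 5,000,000 TB (~5 exabytes)
```

Five million terabytes is a crude estimate built on two rounded numbers, but it makes the point: no survivor is carrying the Internet out on a hard drive.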
The “invisible” deep web is thousands of times larger than the web that Google typically returns in searches, so it is worth learning some of the ways to mine the deep web.
Even if you can’t save everything, the sooner you get started saving the things you see as essential, the better you will get at that game.
Information storage is becoming denser, lighter, smaller, easier to pack and carry. But the human brain is staying about the same. This means that any society advanced enough to develop into an abundant and expansive human future will need a lot of human brains; the better the quality of those brains, the better the prospects for that society’s future.
Humans do not want to hand over their futures to machines. But we will gladly use machines in any way that will help us to reach the kind of future we want.