Data, Data everywhere, but where to store it all


The world is producing so much data that there is a real risk of running out of storage if current trends continue, writes Satyen K. Bordoloi as he outlines solutions


Bend the wrist, adjust the screen for an imperfectly cute frame, say ‘oooo’ (don’t smile) and press the red button on the screen to click a selfie, parts of which will float through cyberspace and be saved in the ‘cloud’. You didn’t even think about it before doing this. But you just added your 3 megabytes (MB) to the more than 2.5 quintillion bytes of data the world generates every day.

Data may be the new oil, but so much of it is being generated every day that there is a serious risk of running out of space to store it all. Don’t believe it? Consider these statistics.

At the beginning of 2020, the total amount of data in the world was estimated at 44 zettabytes (ZB). A single zettabyte is 10²¹ bytes – 1000 (one kilobyte) raised to the seventh power – so written out it carries 21 zeros. One zettabyte is roughly the equivalent of 660 billion Blu-ray discs, 33 million human brains, or 330 million of the world’s largest hard drives.
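To see where those 21 zeros come from, here is a minimal Python sketch of the decimal (SI) byte prefixes; nothing here goes beyond the standard definitions themselves:

```python
# Minimal sketch: decimal (SI) byte prefixes, showing that a
# zettabyte written out is a 1 followed by 21 zeros (10**21 bytes).
PREFIXES = ["KB", "MB", "GB", "TB", "PB", "EB", "ZB"]

for power, prefix in enumerate(PREFIXES, start=1):
    n_bytes = 1000 ** power  # SI definition: each step multiplies by 1000
    print(f"1 {prefix} = 10^{3 * power} = {n_bytes:,} bytes")

# Last line printed: 1 ZB = 10^21 = 1,000,000,000,000,000,000,000 bytes
```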

Two Seagate Barracuda hard drives from 2003 and 2009, with 160 GB and 1 TB of storage respectively. As of 2022, Seagate offers capacities of up to 20 TB (Picture: Wikipedia)

Then came the pandemic, and in 2020 alone the world created or replicated 64.2 ZB of data, according to a report by International Data Corporation (IDC). The report stated, “Data creation and replication will grow faster than installed storage capacity.” In other words, we could run out of hard disk drives (HDDs) to store it all.

The saving grace for 2020 was that only “2% of this new data was stored and retained until 2021 – the rest was either ephemeral (mainly created or replicated for consumption) or temporarily cached and then overwritten with newer data.”

The world already has hundreds of miles of data farms like this one run by Google to store all of our data (Picture: Google data centers)

However, global data volume doubles roughly every two years and is projected to reach 175 ZB by 2025 – with 90 ZB of that coming from Internet of Things (IoT) devices. With data creation growing at a CAGR of 23% while global storage capacity grows at a CAGR of only 19.2%, creation is outpacing capacity, and we have a big problem.
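Those numbers can be sanity-checked with a back-of-the-envelope compound-growth calculation. Taking IDC’s 64.2 ZB for 2020 as the base (an assumption – the projection’s exact base year isn’t stated) and compounding at 23% for five years lands near the 175 ZB figure, while capacity compounds noticeably more slowly:

```latex
% Data creation, compounded at 23% CAGR from 2020 to 2025:
\[ 64.2\,\mathrm{ZB} \times (1.23)^{5} \approx 64.2 \times 2.82 \approx 181\,\mathrm{ZB} \]
% vs. storage capacity growing only by a factor of (1.192)^5 \approx 2.41
```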


FLASH FACTS

Flash memory, which offers far higher speeds, is increasingly being used in place of hard disk drives (HDDs). However, hard drives last longer, are more reliable, and are cheaper, so solutions are being developed to increase HDD speed and capacity. One is Shingled Magnetic Recording (SMR), which increases areal density by overlapping adjacent write tracks like roof shingles, leaving a narrower strip of each track readable.
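To make the shingling idea concrete, here is a small Python sketch. The track widths are made-up illustrative numbers, not figures from any drive vendor; the point is only that overlapping tracks leaves a narrower effective pitch, so more tracks fit in the same radial space:

```python
# Hypothetical illustration of why shingled (overlapping) tracks
# raise areal density. Geometry numbers are invented for illustration.
WRITE_TRACK_NM = 75    # full width laid down by the write head
EXPOSED_PITCH_NM = 60  # strip left readable after the next track overlaps it

def tracks_per_mm(pitch_nm: float) -> float:
    """Number of tracks that fit in 1 mm of radial space at a given pitch."""
    return 1_000_000 / pitch_nm

conventional = tracks_per_mm(WRITE_TRACK_NM)  # tracks laid side by side
shingled = tracks_per_mm(EXPOSED_PITCH_NM)    # tracks overlapped like shingles

print(f"Conventional: {conventional:,.0f} tracks/mm")
print(f"Shingled:     {shingled:,.0f} tracks/mm")
print(f"Density gain: {shingled / conventional - 1:.0%}")  # 25% with these numbers
```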

But SMR has its limitations, so manufacturers are introducing new technologies, including HAMR (heat-assisted magnetic recording), MAMR (microwave-assisted magnetic recording), OptiNAND – which augments the hard disk controller with a small flash drive for storing metadata – and NVMe (Non-Volatile Memory Express).

These have already increased capacity: Western Digital and Seagate have come out with 20 TB hard drives for cloud storage, and Toshiba with an 18 TB drive, all using one or a mix of these technologies.

State-of-the-art hard disk drive areal densities from 1956 to 2009 compared to Moore’s Law. By 2016, progress had slowed well below the extrapolated density trend (Picture: Wikipedia)

Even larger capacity hard drives are on the way, and other approaches are being tried, including two-dimensional magnetic recording (TDMR), which targets a 20 percent increase in density; energy-assisted magnetic recording (EAMR); and heated-dot magnetic recording (HDMR), a proposed improvement on HAMR.


In 2020, Showa Denko (SDK), a Japanese disk-media manufacturer, announced the development of a new type of hard disk media using HAMR, expected to enable capacities of up to 80 TB. It uses thin films of an Fe-Pt magnetic alloy – one of the strongest magnetic materials, with high corrosion resistance – alongside a new magnetic layering structure and temperature control; the company claimed the result has higher magnetic coercivity than any existing media.

The other approach is to use software to detect empty space, maximising the use of available capacity. In storage virtualization, software identifies free storage across multiple physical devices and reallocates it in a virtualized environment to make the best use of what is available. Using storage effectively increases capacity without having to purchase new hardware – think of it as rearranging a packed wardrobe and finding room for a few more clothes.
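A real storage virtualization layer sits below the filesystem, but the gist – scan several physical volumes, total up their free space, and present it as one pool – can be sketched in a few lines of Python. The mount points here are hypothetical placeholders:

```python
# Minimal sketch of the pooling idea behind storage virtualization:
# find free space on several physical volumes and report it as one pool.
import shutil

MOUNT_POINTS = ["/", "/mnt/disk1", "/mnt/disk2"]  # hypothetical volumes

pool_free = 0
for mount in MOUNT_POINTS:
    try:
        usage = shutil.disk_usage(mount)  # named tuple: total, used, free (bytes)
    except FileNotFoundError:
        continue  # skip volumes that don't exist on this machine
    print(f"{mount}: {usage.free / 1e9:,.1f} GB free of {usage.total / 1e9:,.1f} GB")
    pool_free += usage.free

print(f"Virtual pool: {pool_free / 1e9:,.1f} GB of reallocatable free space")
```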

RADICAL IDEAS

Since the dawn of the computing age, newer materials have been discovered that store more data in a smaller space. In 1998 there was talk of a new oxide material that would exploit a phenomenon called colossal magnetoresistance to improve storage capacity.

In 2017, scientists at the Department of Electrical and Computer Engineering at the National University of Singapore announced they had combined cobalt and palladium into a film that could host stable skyrmions for storing and processing data.

In 2015, researchers at the US Naval Research Laboratory (NRL) gave graphene magnetic properties and believed they had found a way to make graphene suitable for data storage at a million times the capacity of today’s HDDs. Last year, however, another group researching a similar graphene-based storage technology claimed it could store 10 times more data than today’s traditional storage.


As the graphene example above shows, there is many a slip between the cup and the lip. Anything is possible in the lab, but when you bring it into the real world, reality strikes. The other problem is funding: a new idea with no funding to turn it into real applications will remain just that, as has been proven countless times over the past 200 years of scientific research. Take fax machines or lithium-ion batteries, first invented in 1843 and 1912, respectively.

The greatest promise lies in rapid advances in quantum computing, which can harness quantum mechanical phenomena such as superposition and entanglement to build not only processing devices but also, potentially, a way to store vast amounts of data as qubits – quantum bits. This is still largely theoretical, although private companies like Google and nations like China are making huge strides in this direction. If and when it happens, quantum computing and storage will transform the world’s entire computing ecosystem, especially how – and how much – data we can store. On a practical level, though, these devices could be decades away from real use.

A hard drive going into a data center stack (Picture: Wikipedia)

According to researcher Melvin M. Vopson, at the rate we are producing data today – about 10²¹ digital bits per year – and assuming 20% annual growth, within roughly 350 years the data we produce each year will exceed the number of atoms on Earth.
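The 350-year horizon follows from straightforward exponential growth. Assuming about 10²¹ bits per year today and roughly 1.33 × 10⁵⁰ atoms on Earth (a commonly cited estimate; Vopson’s exact inputs may differ), solving for the number of years n of 20% compounding gives a figure in the same ballpark:

```latex
% Years n until annual bit production exceeds the atoms on Earth:
\[ 10^{21} \times (1.2)^{n} \ge 1.33 \times 10^{50}
   \;\Longrightarrow\;
   n \ge \frac{\ln(1.33 \times 10^{29})}{\ln 1.2} \approx 368 \]
% i.e. on the order of 350 years, depending on the exact starting values.
```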

Chart: How much of the global datasphere is real time

Within 300 years, the energy required to sustain this production would exceed what the entire planet consumes today, and within 500 years the weight of digital content would be half the mass of the planet.

This is an information catastrophe.

Chart: Data interactions per connected person per day

So the next time you click a selfie, pause to consider whether you will want to look at that photo a year later. If not, delete it. If each of us reduces our data storage load, even a little, we will do global storage capacity – and the planet – a great favour. Data sustainability is a fringe term today, but it won’t be long before it becomes as serious a concern as climate change.
