A robot chef that learns from videos

You might not often think about it that way, but cooking is a difficult skill with a number of factors in play. Just ask a robot! While human beings can learn to cook through observation, the same cannot be done easily by a robot. Programming a robot that can make a variety of dishes is not only costly, but also time-consuming.

A group of researchers from the University of Cambridge have programmed their robotic chef with a cookbook - eight simple salad recipes. The robot was not only able to identify which recipe was being prepared after watching a video of a human demonstrating it, but was also then able to make it. The results were reported in the journal ‘IEEE Access.’

Simple salads

For this study, the researchers started off by devising eight simple salad recipes and then made videos of themselves making these. A publicly available neural network programmed to identify a range of different objects was then used to train the robot chef.

The robot watched 16 videos and was able to recognise the correct recipe 93% of the time (15 times out of 16), even though it detected only 83% of the actions of the human chef in the video. The robot was able to recognise that slight variations (portions or human error) were just that, and not a new recipe. It even recognised the demonstration of a new, ninth salad, added it to its cookbook and made it.

Hold it up for them

The researchers were amazed at the amount of nuance that the robot could grasp. For the robot to identify an ingredient, the demonstrators had to hold up the fruit or vegetable so that the robot could see it whole, before it was chopped.

These videos, however, were nothing like the food videos with fast cuts and visual effects that trend on social media. While those are too hard for a robot to follow at the moment, researchers believe that robot chefs will get better and faster at identifying ingredients in such videos with time, thereby becoming capable of learning a range of recipes quickly.

Picture Credit : Google 

Powerful launcher called Titan IIIC

On June 18, 1965, the expendable launch system Titan IIIC flew for the first time. Used by the U.S. Air Force and NASA from 1965 to 1982, Titan IIIC was a powerful launcher.

Do you know what an expendable launch system is? These are launch vehicles that can be launched only once. This means that the components are either destroyed during re-entry or are discarded in space after launch.

Also called expendable launch vehicles (ELVs), such systems usually contain several rocket stages. As the vehicle gains altitude and speed, these stages are discarded sequentially as their fuel is exhausted.
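The advantage of discarding empty stages can be sketched with the Tsiolkovsky rocket equation, which gives the speed a rocket gains from burning fuel. The masses and engine efficiency (Isp) values below are made-up illustrative numbers, not Titan figures:

```python
import math

G0 = 9.81  # standard gravity, m/s^2

def stage_delta_v(isp_s, m_start_kg, m_end_kg):
    """Speed gained by one stage (Tsiolkovsky rocket equation)."""
    return isp_s * G0 * math.log(m_start_kg / m_end_kg)

# Stage 1: 100 t at lift-off, 20 t when its fuel is exhausted;
# the empty casing is then discarded.
dv1 = stage_delta_v(280, 100_000, 20_000)

# Stage 2: starts at only 10 t because the dead weight is gone.
dv2 = stage_delta_v(310, 10_000, 3_000)

total = dv1 + dv2  # roughly 8 km/s with these illustrative numbers
```

Dropping the empty first stage means the second stage pushes far less dead weight, which is why staged rockets reach orbital speeds that a single stage of the same era could not.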

The Titan IIIC was one such ELV. Used mainly by the U.S. Air Force and also by NASA, the rocket consisted of modified liquid-fuel first and second stages with two lateral strap-on solid rockets to enhance boost at lift-off.

Began as an ICBM

The Titan family of launch vehicles started off as a large intercontinental ballistic missile (ICBM) as the U.S. Air Force sought an ICBM that would surpass Atlas in terms of delivery capacity and sophistication. Just like the Atlas and Thor, Titan too evolved into an important family of space launch vehicles.

The development contract for what would become the Titan ICBM was issued in October 1955. It was named after the Titans, the children of Uranus (Heaven) and Gaea (Earth) and their descendants in Greek mythology. The first Titan was test-launched on February 6, 1959, but Titan I wasn't modified for spaceflight.

Modified for Gemini Project

That first happened with Titan II, a more powerful version of Titan I. Tested successfully in March 1962, Titan II was declared operational in 1963. Initially modified as the Gemini-Titan II to be the launch vehicle of the crewed Gemini Project, it was then used to place satellites in orbit as well.

When there was a need for rockets that were capable of carrying heavier payloads than those handled by Atlas-Centaur, the Titan III family of launch vehicles was born. The Titan IIIA was a Titan II ICBM with an added third stage called the transtage, which used twin Aerojet engines and burned Aerozine 50 and nitrogen tetroxide liquid fuel.

Two strap-on boosters

Titan IIIC was an upgrade on Titan IIIA. The most important modification was the addition of two huge strap-on solid rocket boosters that were over 25m tall and 3m wide. They were capable of remarkable thrust as they were powered by burning aluminium/ammonium perchlorate solid fuel.

On June 18, 1965, the Titan IIIC was launched for the first time from Cape Canaveral, Florida with a payload of nearly 10,000 kg. From 1965 to 1982, the Air Force employed different Titan IIICs over 30 times successfully, placing a variety of military communications and reconnaissance satellites in orbit.

In all, there were only five complete or partial launch failures with Titan IIICs. It was also used successfully by NASA for a number of launches, including in 1973 to launch an Applications Technology Satellite.

As long as it was in use, the Titan IIIC was the most powerful launcher that was used by the Air Force. It remained that way until 1982, when Titan 34D, which was based on Titan IIIC, was introduced. The last flight of a Titan IIIC took place on March 6, 1982.


What is the History of science fiction?

Science fiction (sci-fi) has taken us on incredible journeys through time and space, allowing us to explore the depths of our imagination and the limits of the universe.

The term science fiction was first used by William Wilson in 1851 in a book of poetry titled ‘A Little Earnest Book Upon a Great Old Subject’. However, the term's modern usage is credited to Hugo Gernsback, who founded the first sci-fi magazine, ‘Amazing Stories’, in 1926. The American editor used this term to describe stories that combined scientific speculation with adventure and futuristic concepts. The term gained widespread use in the 1930s and 1940s and has since become a popular genre of literature and entertainment.

Generally, the beginning of the literary genre of sci-fi is traced to 19th Century England and the Industrial Revolution, a time when rapid technological change inspired and popularised stories that were typically set in the future and explored themes such as time travel and interplanetary voyages. These stories dealt with the limits of human knowledge and the unintended consequences of our technological prowess. However, literary scholars claim that the earliest literary work that could fit into the genre of sci-fi dates back to the second century AD.

A True Story: The earliest surviving work of sci-fi

Written by the Syrian satirist Lucian, ‘A True Story’ (also known as ‘True History’) is a two-book parodic adventure story and a travelogue about outer space exploration, extraterrestrial lifeforms, and interplanetary warfare. It is extraordinary that the author produced a story that so accurately incorporated multiple hallmarks of what we generally associate with modern sci-fi, centuries before the invention of instruments such as the telescope.

Lucian was from Samosata (in present-day Turkey), and his first language is believed to have been Aramaic, but he wrote in Greek. He might not be a household name today, but literary scholars call him one of antiquity's most brilliant satirists and inventive wits. He is famous throughout European history for his absurd yet fantastical works and for his overt dispelling of the ridiculous and illogical social conventions and superstitions of his time. His works have been an inspiration for literary classics such as Jonathan Swift's ‘Gulliver's Travels’ and Thomas More's ‘Utopia’.

The basic classification of sci-fi

Sci-fi can be broadly classified into two categories: soft sci-fi and hard sci-fi.

Soft sci-fi, also known as social sci-fi, emphasises the social and humanistic aspects of science and technology, often exploring the effects of scientific advances on society and individuals. Examples of soft sci-fi include Margaret Atwood's ‘The Handmaid's Tale’, which explores the social and political consequences of a future where women's rights have been severely restricted. Hard sci-fi, also known as scientific or realistic sci-fi, places a greater emphasis on scientific accuracy and realism, often using established scientific principles and theories to explore the possibilities of the future. An example of this is Andy Weir’s ‘The Martian’, which narrates the story of an astronaut stranded on Mars and his efforts to survive by using his scientific knowledge and problem-solving skills.


What is the role of ISRO in space technology?

The ISRO works to develop and apply space technology in various sectors of our economy.

The Indian Space Research Organisation (ISRO) and the Indian Navy continue to conduct important trials for the Gaganyaan mission. However, do you know what ISRO is?

Organisation

The ISRO is India's space agency, established on August 15, 1969.

Previously known as the Indian National Committee for Space Research (INCOSPAR), it was envisioned by Vikram Sarabhai, who helped develop nuclear power in India and is considered one of the founding fathers of the Indian space programme. ISRO is a major constituent of the Department of Space (DOS), Government of India.

The department executes the Indian Space Programme primarily through various centres or units within the ISRO.

Works

The ISRO works to develop and apply space technology in various sectors of our economy. It has established major space systems for communication, television broadcasting, and meteorological services.

ISRO's first satellite, Aryabhata, was launched by the Soviet Union on April 19, 1975. Meanwhile, Rohini, the first satellite to be placed in orbit by an Indian-made launch vehicle, was launched on July 18, 1980. ISRO has developed satellite launch vehicles, the PSLV (Polar Satellite Launch Vehicle) and the GSLV (Geosynchronous Satellite Launch Vehicle), to place satellites in the required orbits.

These rockets have launched communications satellites and Earth-observation satellites as well as missions to the Moon and Mars - Chandrayaan-1, 2008; Chandrayaan-2, 2019; and Mars Orbiter Mission (MOM), also called Mangalyaan, 2013.

ISRO has launched several space systems, including the Indian National Satellite (INSAT) system for telecommunication, television broadcasting, meteorology, and disaster warning and the Indian Remote Sensing (IRS) satellites for resource monitoring and management. The first INSAT satellite was launched in 1982 and the first IRS satellite in 1988.

While ISRO's headquarters is in Bengaluru, the launch vehicles are built at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram. Launches take place at the Satish Dhawan Space Centre on Sriharikota Island, near Chennai.

ISRO's chief executive is a chairman, who is also chairman of the Indian government's Space Commission and the secretary of the Department of Space. Its current chairman is S. Somanath.


A test flight with a number of firsts

The 1960s were a rather exciting time if you were part of NASA. After U.S. President John F. Kennedy stated his goal of landing humans on the moon and returning them safely home before the end of the decade, work at NASA progressed at breakneck speed given the enormity of the task ahead.

There were a lot of successes along the way, and setbacks too that proved to be equally important in terms of the overall learning. The Apollo-Saturn (AS) 201 mission in the mid 1960s was one such test flight that had a number of firsts, but also experienced malfunctions.

"All-up" philosophy

Coming at the height of Project Gemini, the AS-201 served as a crucial milestone in our march towards the moon. It used the "all-up" philosophy, according to which all components of a system were tested in a single first flight.

A suborbital test flight, its goals included demonstrating the Saturn IB's capabilities, the operation of the Apollo Service Module's (SM) main engine, and determining the effectiveness of the Command Module's (CM) heat shield. The Saturn IB rocket, which built on the 10 successful launches of the Saturn I rocket, was the most powerful rocket up to that time.

Construction of the AS-201 spacecraft began in 1963 at the North American Aviation (NAA) plant in California. Assembly for the mission began in 1965 with the Saturn IB first stage arriving at the Cape Kennedy Air Force Station (CKAFS), now the Cape Canaveral Space Force Station, on August 14.

Extensively tested

The CM and SM of the spacecraft arrived within two days of each other in October. After successful mating of the two modules and extensive testing, they were trucked to the launch pad and stacked on top of the rocket by December. By January 1966, the final pieces were in place, and the rocket and spacecraft were declared ready for its mission after a flight readiness review and a countdown demonstration.

On February 26, 1966, the AS-201 mission lifted off after a number of launch delays. With flight director Glynn S. Lunney at the helm, a team of engineers kept an eye on all aspects of the mission.

Both stages of the Saturn IB rocket performed well and the Apollo Command and Service Module (CSM) was placed in its suborbital trajectory, with a peak altitude of 488 km. A camera mounted inside the first stage was later recovered at sea, and it had captured some key moments, including the fiery stage separation.

Helium ingestion in propellant lines, however, resulted in lower thrust than predicted during the first burn and the same problem also affected a second burn to test the engine's restart capability. The Service Propulsion System engine also underperformed, meaning the CM entered the atmosphere at a velocity slower than that planned.

Additionally, the capsule rolled during reentry as an electrical fault in the CM led to a loss of steering. The heat shield performed its duties without any flaws despite all these setbacks and the spacecraft splashed down in the Atlantic Ocean, 75 km from the intended target.

On museum display

The largely successful 37-minute test flight travelled 8,472 km overall. The CM was retrieved by swimmers from the prime recovery ship and it was then sent to the NAA plant for postflight inspections. After using it for land impact tests, NASA donated the capsule, which is now on loan and is displayed at the Strategic Air Command and Aerospace Museum.

The Saturn IB is now largely forgotten as its efforts pale in comparison with those of the Saturn V, one of the largest and most powerful rockets ever built and the one that successfully sent people to the moon. But the Saturn IB rocket and the AS-201 mission were all part of the small stepping stones that made the giant leap possible.


Stories behind inventions

Who set up the world's first website? When was it? Any idea how large the first commercial microwave oven was? Did you know two inventors, working independently, came up with near-identical integrated circuits at about the same time? Who were they? Read on to find out the answers and the backstories of a few other inventions

Connecting the world

In 1969, the Internet took its first baby steps as Arpanet, a network created by the U.S. Advanced Research Projects Agency (ARPA, later renamed DARPA). It connected universities and research centres, but its use was restricted to a small community of researchers.

Then in the 1990s, the technology made a quantum jump. Tim Berners-Lee, an English software consultant, wrote a program called 'Enquire’, named after 'Enquire Within Upon Everything', a Victorian-age encyclopaedia he had used as a child. He was working for CERN in Switzerland at the time and wanted to organise all his work so that others could access it easily through their computers. He developed a language coding system called HTML (HyperText Markup Language), a location unique to every web page called the URL (Uniform Resource Locator), and a set of protocols or rules (HTTP, or HyperText Transfer Protocol) that allowed these pages to be linked together on the Internet. Berners-Lee is credited with setting up the world's first website in 1991.
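The pieces Berners-Lee combined are still visible in every web address today. A minimal sketch using Python's standard urllib.parse module, with a purely illustrative URL:

```python
from urllib.parse import urlsplit

# An illustrative address, broken into the parts Berners-Lee defined.
url = "http://example.org/pages/history.html"
parts = urlsplit(url)

scheme = parts.scheme  # the protocol, here 'http'
server = parts.netloc  # the machine that holds the page
page = parts.path      # the page's unique location on that server
```

The protocol says how to talk to the server, the server name says where to look, and the path says which page to fetch; a browser does exactly this split every time you type an address.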


Berners-Lee did not earn any money from his inventions. Others, however, did: Marc Andreessen, who co-founded Netscape in 1994, became one of the Web's first millionaires.

It began with a bar of chocolate!

The discovery that microwaves could cook food super quickly was purely accidental. In 1945, American physicist Percy Spencer was testing a magnetron tube engineered to produce very short radio waves for radar systems, when the chocolate bar in his pocket melted. Puzzled that he hadn't felt the heat, Spencer placed popcorn kernels near the tube, and in no time, the popcorn began crackling. His company Raytheon developed this idea further and in 1947, the first commercial microwave oven was introduced - all of 1.5 metres high and weighing 340 kg!

Since it was too expensive to mass-produce, Raytheon went back to the drawing board and in the 1950s came out with a microwave the size of a small refrigerator. A few years later came the first regular-sized oven - far cheaper and smaller than the previous models.

Chip-sized marvel

A microchip, often called a "chip" or an integrated circuit (IC), is what makes modern computers more compact and faster. Rarely larger than 5 cm in size and manufactured from a semiconducting material, a chip contains intricate electronic circuits.

Two inventors, working independently, came up with near-identical integrated circuits at about the same time! In the late 1950s, both American engineer Jack Kilby (Texas Instruments) and research engineer Robert Noyce (Fairchild Semiconductor Corporation) were working on the same problem - how to pack the maximum number of electrical components into the minimum space. It occurred to them that all parts of a circuit, not just the transistor, could be made on a single chip of silicon, making it smaller and much easier to produce.

In 1959, both engineers applied for patents, and instead of battling it out, decided to cooperate to improve chip technology. In 1961, Fairchild Semiconductor Corporation launched the first commercially available integrated circuit. This IC had barely five components and was the size of a small finger. All computers began using chips, and chips also helped create the first electronic portable calculators. Today an IC, smaller than a coin, can hold millions of transistors!

Keeping pace with the heart

Pacemakers send out electrical signals to the heart to regulate erratic heartbeats. Powered by electricity, early pacemakers were as big as televisions, with a single wire or 'lead' being implanted in the patient's heart. A patient could move only as far as the wire would let them and electricity breakdowns were a major cause of worry!

In 1958, a Swedish surgeon and an engineer came together to invent the first battery-powered external pacemaker. Around the same time, American electrical engineer Wilson Greatbatch was creating a machine to record heartbeats. Quite by accident, he realised that by making some changes, he was getting a steady electric pulse from the small device. After two years of research, Greatbatch unveiled the world's first successful implantable pacemaker that could surgically be inserted under the skin of the patient's chest.


Who received India's first Nobel Prize for physics?

Sir Chandrasekhara Venkata Raman was an Indian physicist known for his work in the field of light scattering. C.V. Raman was the first Indian physicist to win the Nobel Prize in Physics, in 1930, “for his work on the scattering of light and for the discovery of the effect named after him".

Nobel Prize-winning Sir C.V. Raman is known for his pioneering work in Physics. India celebrates National Science Day on February 28 each year to mark the discovery of the Raman Effect on that day in 1928.

Sir Chandrasekhara Venkata Raman, also known as C.V. Raman, was a pioneering physicist. Born on November 7, 1888, he was a precocious child who excelled in Physics during his student days at Presidency College, and later, at the University of Madras. He is best known for his discovery of the Raman Effect, a phenomenon in which a small fraction of the light passing through a transparent medium is scattered with a change in wavelength. This discovery revolutionised the field of spectroscopy and earned him the Nobel Prize in Physics in 1930.
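The size of that wavelength change is conventionally reported as a "Raman shift" in wavenumbers (cm^-1), the difference of the reciprocal wavelengths. A small sketch with hypothetical numbers (the 532 nm laser line and 563 nm scattered line below are illustrative, not values from the article):

```python
NM_PER_CM = 1e7  # 1 cm = 1e7 nm, to convert reciprocal nm to cm^-1

def raman_shift_cm1(incident_nm, scattered_nm):
    """Raman shift = 1/lambda_incident - 1/lambda_scattered, in cm^-1."""
    return NM_PER_CM / incident_nm - NM_PER_CM / scattered_nm

# Hypothetical example: green laser light at 532 nm scattered at 563 nm.
shift = raman_shift_cm1(532.0, 563.0)  # about 1035 cm^-1
```

Spectroscopists use this shift, rather than the raw wavelengths, because it corresponds directly to the vibrational energy of the molecules doing the scattering.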

Raman was born in Tiruchirapalli in Tamil Nadu. He showed an early aptitude for mathematics and science. He graduated from Presidency College in Madras with a degree in Physics and went on to work at the Indian Finance Service. However, he soon realised that his true passion was in Physics and left his job to pursue a career in research at the Indian Association for the Cultivation of Science. It was here that he was given an opportunity to mentor research scholars from several universities, including the University of Calcutta.

He was appointed as Director (first Indian) of the Indian Institute of Science, Bangalore, in 1933. In 1947, he was appointed the first National Professor of independent India. He retired from the Indian Institute in 1948. About a year later, he established the Raman Research Institute in Bangalore.

Raman was not only a brilliant scientist, but also a visionary. He believed that science should be accessible to all people, regardless of their background or social status. He was instrumental in the founding of several science institutions. His aim was to encourage the study of science in India.

In addition to the Nobel Prize, Raman received many other honours and awards throughout his career. He was elected a Fellow of the Royal Society in London in 1924 and was conferred a knighthood by the British government in 1929. He also received numerous awards and honours from the Indian government, including the Bharat Ratna in 1954.

Raman passed away on November 21, 1970, at the age of 82. He is remembered as one of India's greatest scientists and is still widely celebrated as a pioneer in the field of physics. His legacy continues to inspire young scientists and researchers, who continue to build on his work to expand our understanding of the world around us.


What is saturation diving?

Let's find out why it is one of the most dangerous jobs on Earth.

It is a method of deep-sea diving in which the divers spend long periods of time in a special pressurised saturation chamber, which may be installed on a ship, an ocean platform, or under water. Saturation divers work at extreme depths of 200-300 metres or more below sea level (in contrast, scuba divers dive to a depth of about 50 metres).

A saturation diver lives in the saturation chamber for the duration of his project. Up to 12 divers live in close proximity in the chamber. In the chamber, the pressure is built up to match the pressure that the diver has to face in his working environment on the seabed. The pressure at a depth of 300 metres under water is about 440 psi (pounds per square inch).
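That figure of about 440 psi follows from the standard hydrostatic-pressure formula, pressure = density x gravity x depth. A quick check (the seawater density and the psi conversion factor are standard textbook approximations):

```python
RHO_SEAWATER = 1025.0  # kg per cubic metre, typical seawater density
G = 9.81               # gravity, m/s^2
PA_PER_PSI = 6894.76   # pascals in one psi

depth_m = 300.0
pressure_pa = RHO_SEAWATER * G * depth_m  # pressure from the water column
pressure_psi = pressure_pa / PA_PER_PSI   # close to the quoted ~440 psi
```

Every 10 metres of seawater adds roughly one atmosphere of pressure, so at 300 metres the diver's body carries about 30 times the pressure it feels at the surface.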

As the surrounding pressure on the diver’s body increases, he breathes in a mixture of oxygen and inert gases such as helium. The longer he stays in that pressurised environment, the more the inert gases dissolve in his body and saturate the diver's blood and tissues. A diver can then withstand the increased environmental pressure for a prolonged time.

To get to the sea floor, the diver exits the pressure chamber habitat through an airlock and enters a diving bell, which is lowered into the sea. After the diver completes his task, he gets back to the surface in the diving bell and re-enters the saturation chamber. Saturation divers usually have a work period of about 28 days after which, they have to compulsorily take a month off before going back to work.

Before the diver leaves the saturation chamber at the end of his work period, he has to undergo a long decompression to avoid 'the bends', technically known as decompression sickness. It is a condition that affects divers who ascend too quickly to the surface, leading to the harmful formation of gas bubbles in the blood and tissues. Saturation diving limits the number of decompressions, thereby significantly reducing the chances of decompression sickness.

Saturation divers are mainly used by the oil and gas industry or for research. They install pipelines, fix heavy machinery, build underwater structures, etc. on the seabed. Saturation diving is one of the most dangerous jobs on earth.
