Why is it said that the introduction of the gramophone record changed the history of audio recording?


          One of the most successful devices to set off a wave of technical development was the gramophone disc. Its invention is credited to Emile Berliner.



          By the end of World War I, the disc had become an important commercial recording format.



         At first, there was no universally accepted standard for disc speed. For a few decades, however, the double-sided 78 rpm disc was the standard consumer music format.



          In 1948, Columbia Records introduced the long-playing 33 1/3 rpm vinyl record, or ‘LP’. The advantage of vinyl discs was that they offered improved performance and lasted longer, although they were more expensive.



          Earlier, in the mid-1920s, the disc had also paved the way for electrical recording, which was the most important milestone in the history of sound recording.


Why is it said that the CDs revolutionized sound recording?

             A compact disc (CD), as some of you might know, is a digital data storage format. Released in 1982, it was originally developed to store and play sound recordings alone. Only later was it adapted for general data storage.



           With the advent of digital sound recordings and the CD, significant improvements were made in the durability of recordings. The CD, particularly, initiated a massive wave of change in the music industry by the mid-1990s.



            When introduced, a CD could store far more data (about 650 MB) than a personal computer hard drive, which would typically hold 10 MB.



            As technology improved, better versions appeared, like write-once audio and data storage (CD-R), rewritable media (CD-RW), Video Compact Disc (VCD), Super Video Compact Disc (SVCD), Photo CD, Picture CD, CD-i, and Enhanced Music CD.



              Although they are being replaced by other forms of digital storage, CDs remain one of the primary distribution methods for the music industry.


What is the role of the equalizer in sound recording?


          No matter how good a video looks, it cannot be complete without perfect audio to accompany it.



         And supposing the soundtrack is weak or unimpressive, won’t it affect the video as a whole? Well, not necessarily. There is a device that can help here: the equalizer.



         Technically, equalization is the process of adjusting the balance between frequency components within an electronic signal. In other words, an equalizer pinpoints a range of audible frequencies and adjusts the level of sounds that fall within that range. With it, one can minimize noise recorded in the field or boost weak audio.
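The idea of adjusting the level of one frequency band can be sketched in a few lines of Python using NumPy's FFT. This is only a minimal illustration of the principle, not how a real equalizer is built: the band edges, gain, and test signal below are all made up for the example.

```python
# Minimal sketch of equalization: scale the frequency components of a
# signal that fall inside one band, leaving the rest untouched.
import numpy as np

def equalize(signal, sample_rate, low_hz, high_hz, gain):
    """Scale frequency components between low_hz and high_hz by `gain`."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    band = (freqs >= low_hz) & (freqs <= high_hz)
    spectrum[band] *= gain                      # adjust that band's level
    return np.fft.irfft(spectrum, n=len(signal))

# Illustrative use: cut a 50 Hz hum mixed into a 440 Hz tone.
sr = 8000
t = np.arange(sr) / sr
sig = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
cleaned = equalize(sig, sr, 40, 60, 0.1)        # reduce 40-60 Hz to 10%
```

A real graphic or parametric equalizer shapes many such bands at once with smooth filters rather than a hard FFT mask, but the underlying operation, raising or lowering the level of a chosen frequency range, is the same.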



        There are different settings in which an equalizer has to be used: recording studios, radio studios, and production control rooms. It is also used for live sound reinforcement.



        Several types of equalizers are used for music production, depending on the needs. Two common ones are graphic and parametric equalizers.


What is meant by an isolation booth?


          An isolation booth is a standard room in a recording studio. Typically, it is soundproofed to keep external sounds out and internal sounds in. It is also designed to produce fewer diffused reflections from the walls, in order to make a good-sounding room.



          In a professional recording studio, there may be one or more small isolation booths, along with a control room and a large live room. All of them are usually soundproofed with double-layer walls and insulation between the two walls.



          The first soundproof booth is said to have been invented in 1876 by Thomas A. Watson. He developed it to be used for demonstrating the telephone with Alexander Graham Bell. However, he did not patent it.



          Today, there are variations of the same concept, like a portable standalone isolation booth, and a compact guitar speaker isolation cabinet. 


What is dubbing?


          In film making, dubbing refers to the process of adding dialogue and other sounds to the soundtrack of a motion picture that has already been shot.



          In other words, it is a post-production process through which a complete soundtrack is created. That is, after the sound editor prepares all the necessary tracks, such as dialogue, effects and music, the dubbing mixer balances all of these elements and records the finished soundtrack. This process typically takes place on a dub stage.





          There is one more context in which the term ‘dubbing’ is used. It is something familiar to most of us as the technique used to translate foreign-language films into the audience’s language. This process is referred to as ‘re-voicing’ outside the film industry.



          Here, the translation of the original dialogue is matched to the lip movements of the actors. In the past, dubbing was applied mainly to musicals, when an actor had an unsatisfactory singing voice. Today, it is done not just in traditional films, but also in video games and television.


Why is it said that a soundtrack is a necessary part of a film?

            In its broadest sense, a soundtrack comprises everything you hear in a movie: the sound effects, dialogue, music and so on.



            The soundtrack of a film is created as part of its post-production process. At first, the dialogue, sound effects, and music are developed as separate tracks. They are then mixed together to make a ‘composite track’, which is what we hear in films. Hence its importance. In fact, the relationship between film and music is such that one remains incomplete without the other.
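The mixing step described above, combining separate tracks into one composite track, can be sketched in a few lines of Python. The sample values here are made-up numbers, not real audio, and the normalization rule is just one simple choice:

```python
# Sketch of a "composite track": separate tracks are summed sample by
# sample, then scaled down only if the result would exceed full scale.
def mix(*tracks):
    """Sum equal-length tracks; rescale if the peak exceeds 1.0."""
    composite = [sum(samples) for samples in zip(*tracks)]
    peak = max(abs(s) for s in composite) or 1.0
    return [s / peak for s in composite] if peak > 1.0 else composite

# Illustrative three-sample "tracks" in the range -1.0 .. 1.0
dialogue = [0.4, 0.5, -0.3]
effects  = [0.2, -0.1, 0.4]
music    = [0.1, 0.2, -0.2]
composite = mix(dialogue, effects, music)   # the one track we actually hear
```

A dubbing mixer, of course, balances each element with its own level, equalization and dynamics before this final sum, but the composite track is still, at bottom, the tracks added together.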



            Soundtracks, as we know, help to build the tone and mood of a film. At times, when a track is especially good, it becomes as memorable as the film itself.



            One of the most quoted examples in this context is the soundtrack of the classic film ‘Titanic’. The song ‘My Heart Will Go On’, sung by Celine Dion, successfully enhanced the story of the ill-fated lovers and lent the film a perfect aura. Even today, not many of us can think of the film without the song lingering in our minds.





 


Why is it said that sound effects are important for a movie?


            Imagine watching a horror film without background sounds evoking a spooky or mysterious feeling, or watching an action thriller without the sound effects that ‘thrill’. How interesting would these be? Not very.



            Indeed, a good film should have a gripping story and impressive acting, but it won’t be complete without good sound effects.



            Sound effects, generally speaking, are artificially created or enhanced sounds. In other words, they are something that helps to tell a story better!



            There are different types of sound effects in a film. One type is hard sound effects: common sounds such as doors slamming, alarms ringing, or weapons firing. Then there are background sound effects, which do not explicitly synchronize with the picture but indicate a setting to the audience, for example, forest sounds. Then there are Foley sound effects: sounds that synchronize with action on screen, like footsteps or the movement of hand props.



            Another important type is the design sound effect. These are used to suggest futuristic technology in a science fiction film.


Why is it said that sound effects are useful not only in movies, but also in video games and music?


            You might have noticed that sound effects are used not just in films, but also in television shows, live performances, animation, video games, music, or other media too.



            As far as video games are concerned, sound effects play an integral role in their success. To understand this better, try playing a game with the sound turned off. Imagine scenes of battle with no sound. Boring, aren’t they? That is because sound effects play a major role in communicating the narrative and enriching the player’s experience.



             In the past, game environments were simple and hence required fewer sounds. As the business of video gaming grew, more people became involved in the sound side of production. Today, most successful projects have a specialized team of sound designers who work to improve the quality of games.



             Such importance is attached to sound not just in video games, but in music too. Some pieces of music use sound effects to enhance their beauty.


What is meant by the term sound-on-film?


            Sound-on-film is a class of sound film processes in which the sound that accompanies a picture is physically recorded onto a photographic film.



            The soundtrack can be either analogue or digital, and the signal may be recorded optically or magnetically.



            The most prevalent method of recording analogue sound on a film print is by stereo variable-area (SVA) recording. The technique was first used in the mid-1970s.



            During the 1990s, three different digital soundtrack systems for 35 mm cinema release prints were introduced: Dolby Digital, stored between the perforations on the soundtrack side; Sony Dynamic Digital Sound, or SDDS, stored in two redundant strips along the outside edges; and DTS, in which the sound data is stored on separate compact discs.



            Since the three systems appear on different parts of the print, one movie can contain all of them, allowing broad distribution regardless of the sound system installed at individual theatres.



 


Why is it said that sync sounds are widely used in movies?


               Sync sound or synchronized sound recording means the recording of sound at the time of filming a movie.



               It is a technique used since the birth of sound movies. The first Indian talkie, ‘Alam Ara’, released in 1931, used sync sound. Following this trend, many other films were shot using the same technique until the 1960s.



 





               The 1960s saw the arrival of the Arri 2C, a practical but noisy camera for outdoor shoots. After its introduction, sync sound fell out of use, owing to the noisy environment the camera created. This, in turn, popularized the technique of dubbing.



               However, in recent times, many Indian films have adopted sync sound. Some examples are Jodhaa Akbar, Lagaan, Dil Chahta Hai and Rock On.



               Although more effort needs to be put in, film-makers who go with sync sound say that the technique helps to breathe life into films. The challenge is to maintain a controlled environment while shooting.



 


Why is DTS a leader in digital sound?

            DTS is short for ‘Digital Theater Systems’. It is a series of multi-channel audio technologies owned by the American company DTS, Inc., which specializes in digital surround sound formats used for commercial, theatrical, and consumer-grade applications.



            Founded in 1993, DTS focuses on the development of surround sound audio technology, encoding, decoding, and processing, used in both cinema and home theatre applications. It must be noted that DTS is not only a company name, but also the label it uses to identify its group of surround sound audio technologies.



            The 1993 megahit ‘Jurassic Park’ by Steven Spielberg was the first theatrical movie to employ DTS surround sound technology. The first home theatre application of DTS audio was the release of Jurassic Park on LaserDisc in 1997.



            Indeed, since its very start, DTS has played a very important role in the audio technology of films.



 


Why is the Dolby system important?


               Roughly speaking, Dolby Digital is the name of a family of audio compression technologies developed by Dolby Laboratories.



               To put it clearly, it is an advanced form of digital audio coding that enables storing and transmitting of high-quality digital sound, far more efficiently than was previously possible.



               The technology was originally named Dolby Stereo Digital. In the initial stage, it was used to provide digital sound in cinemas from 35mm film prints. Today, it is used for other applications too, such as TV broadcast, radio broadcast via satellite, DVDs, and game consoles.



               The first film to use Dolby Digital technology was ‘Batman Returns’ when it premiered in 1992. Since then, the technology has constantly proved to be a step forward in sonic realism and listener involvement. It has been able to change the way we experience films in theatres, and, is now doing the same for video programming at homes too. 


Why is it said that SDDS is an important digital sound format?


            Sony Dynamic Digital Sound or SDDS is a theatrical cinema sound system developed by the Japanese multinational company Sony.



            In this format, digital sound information is recorded on both outer edges of 35mm film release prints. Altogether, the system supports as many as eight independent channels of sound, including five front channels, two surround channels, and a single sub-bass channel. The extra front channels are particularly useful in large cinema auditoriums, where the angular distance between the centre and left/right channels may be considerable.



            The SDDS format premiered on June 17, 1993, with the Arnold Schwarzenegger film Last Action Hero. Since then, around 1,500 movies have been mixed in the format, and over 6,750 movie theatres had been equipped with SDDS by 1999.



            However, compared with the other two digital sound formats, Dolby Digital and DTS, SDDS is less popular. As of now, SDDS is not available on any home entertainment format.


Why is noise a nuisance?


          Every day we hear different kinds of sounds. Some are sweet and appealing, while some are highly disturbing.



          Noise refers to such unwanted sounds: loud and unpleasant to hear. Technically, noise cannot be distinguished from sound, as both are vibrations through a medium like water or air. The difference arises inside the brain, which receives and perceives the sound.



          Commonly, noise is discussed in terms of decibels (dB), a measure of the loudness or intensity of a sound. There are different kinds of noise around us today, and one of them is environmental noise. Roughly speaking, this is the accumulation of all noise present in a specified environment.
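The decibel is a logarithmic ratio rather than an absolute quantity: sound pressure level is 20 times the base-10 logarithm of a measured pressure divided by a reference pressure of 20 micropascals, roughly the threshold of human hearing. A short Python sketch makes the arithmetic concrete; the pressure values are illustrative:

```python
# Sound pressure level in decibels: 20 * log10(p / p0),
# with p0 = 20 micropascals (approximate threshold of hearing).
import math

def spl_db(pressure_pa, reference_pa=20e-6):
    """Convert a sound pressure in pascals to dB SPL."""
    return 20 * math.log10(pressure_pa / reference_pa)

quiet = spl_db(20e-6)    # the reference pressure itself: 0 dB
double = spl_db(40e-6)   # doubling the pressure adds about 6 dB
```

Because the scale is logarithmic, each tenfold increase in pressure adds 20 dB, which is why a seemingly small rise in dB represents a large rise in intensity.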



          The sources of this noise are many, e.g. motor vehicles, aircraft, trains and industrial sources.



          Besides, factors like poor urban planning, construction work, industrial activity, and loud musical performances also contribute to creating noise.



          Together, these factors regularly expose millions of people to noise pollution. 


What is meant by the term noise pollution?

            Noise pollution, or sound pollution, refers to excessive and troublesome sound that affects all living things on Earth. In the case of people, it causes both physical and mental effects.



            There are many reasons attributed to this kind of pollution. The predominant ones are industrial activities and social activities.



            There are many people who like firecrackers. When exploded during celebrations and festive occasions, these firecrackers produce a huge amount of noise. This is one common example of noise pollution.



            Another factor is vehicles. Many studies have found that the main source of noise pollution in large cities is the sound from different modes of transportation, like cars, buses, trains, trams and airplanes.



            In addition, the unrestricted use of loudspeakers and microphones also disturbs people and other living beings badly.