From architecture to streaming: How technology shaped music songwriting
The first of hopefully many long-reads you will see from me this year
Strap in, folks. Long-read today. I hope you like it.
As technology evolves, it changes the tools for music’s creation, dissemination and consumption. These in turn end up shaping the songwriting itself, often as a result of second-order effects. I find this utterly fascinating. It demonstrates how music - all art, really - is a product of its time and context. Put another way, if Pink Floyd were to start today, they would sound extremely different, though the creative spirit might be the same.
That the medium dictates the message is hardly a new insight. Why, most of us have come to accept news and video entertainment evolving with the times - but somehow the art of music seems more… Pure? At least to traditionalists who lament the “death of the album” or make the “music was better in my day” argument (coughrickbeatocough).
This makes me even more interested in this topic. What some may consider a musical ideal-body-type was most likely a result of technological limitations or commercial compulsions of the era. Put another way, the fact that some artists saw a distribution mechanism like a long-playing record and chose to weave storytelling into it speaks more to their own creativity than to some inherent property of the medium. Again, if the Beatles were starting their careers in 2025… you get the drift.
Anyway, you’ve probably figured why this is a pet topic of mine. I also realise this article is starting to resemble a song from the early 80s - a gloriously meandering intro before finally getting to the point - so let’s cut to the chorus.
This article features some examples of how music has been shaped by the technology du jour. My focus is largely on the songwriting - not marketing or public perception.
Smaller venues allowed classical music to get increasingly complex
Let’s take an OG form of technology - architecture. Its relationship with music is fascinating, which you can peruse in this excellent long-read. To oversimplify, the smaller a venue got, the more detail a musician could employ in their music, allowing for more creativity and experimentation.
A good place, literally, to start would be Gothic cathedrals. They were one of the first public spaces where ‘music’ could be heard, though more for religious than entertainment reasons. Their vast, open spaces were conducive to Gregorian chants, which relied on sustained tones rather than rhythm or harmony. This reverberation created the ethereal resonance that the space was built for. Sample a few seconds of this, for instance.
A large church hall seems the right home for that, right?
As music became more of a pastime and moved to smaller venues specific to entertainment, composers started experimenting. After all, their artistic choices would now become more apparent to audiences. By the early 18th century, Johann Sebastian Bach was taking counterpoint (combining multiple independent melodies) to new heights. The first few seconds of this video will explain the concept (no musical knowledge needed).
This technique would have gotten lost in a reverberant cathedral that tended to blend notes together. Along the way, Bach became the first great composer many of us might know. A few decades later, royal patronage allowed Wolfgang Amadeus Mozart to get even more intricate with his compositions, which made full use of the intimacy of royal courts and salons. His ‘conversational’ style of composition could be perfected only in those smaller venues.
While this essay is about technology, we can’t ignore the role that changing cultural mores play in shaping music. For example, as musical performances got more ‘elite’, audiences were expected to maintain silence. This approach - as classist as it was - had one positive outcome: composers could experiment with dynamic range - that is, utilising volume as an instrument. This allowed for dramatic contrasts between soft and loud passages, a technique that became central to Romantic-era music, with names such as Chopin and Schubert… And perhaps, well over a century later, inspiring metal bands like Opeth that do the whole light-to-dark thing well. Here’s a nice video showing dynamics in classical music, from a piece aptly titled Romantic.
As musicians became public figures and grew influential, they could command spaces optimised for their kind of music. Richard Wagner built the Bayreuth Festspielhaus, a hall suited to large orchestras and the pomp his compositions demanded. An example of how the music dictated the venue rather than the other way around!

It’s not just Western Classical. Back home, Carnatic and Hindustani music have a deep association with temples and other venues. Madurai’s Meenakshi Temple features a hall of musical pillars and would have been an awe-inspiring venue for religious singing. Over in Hampi, we have the SaReGaMa pillars, and one assumes the halls were built so vocals and ancient instruments could reverberate.
It wasn’t just classical music, of course. As popular music moved to new venues, it adapted and gave us entirely new genres. In the 20th century, dance halls encouraged artists to extend their songs to keep the patrons jiving - leading to the improvisation that would characterise jazz. This context would dictate the choice of instrument, too - the trumpet was more a practical than a stylistic choice in early jazz settings, as its high frequencies could cut through talkative audiences. Even smaller venues allowed more clarity in vocals (and engagement with the audience), giving us everything from the intimacy of singer-songwriters to the energy of punk rock… With their emphasis on lyrics and messaging. Meanwhile, “arena rock” acts like U2 wrote simple, singalong, mid-tempo ballads with an eye on replicating them live in stadiums - preferred venues for touring but not the best for sonic detail. There’s an excellent TED talk on this topic by David Byrne, the vocalist of rock band Talking Heads.
Architecture is frozen music, indeed.
Related: Talkies and theatres led to cinema-specific music
Music is such an integral part of movies - especially for us in India - that it’s hard to imagine that there was a time it wasn’t. Indeed, early cinema didn’t have sound at all! But when it did, innovations came thick and fast. Fittingly, the movie that signaled the end of the ‘silent era’ was 1927’s The Jazz Singer. It was the first feature film with both synchronized recorded music and lip-synced singing and speech.
As film and cinema-going became an integral part of life in the subsequent decades, advancements continued apace. Walt Disney - ever the proponent of delivering an immersive experience - got his engineers to develop Fantasound, a sound reproduction technique specific to 1940’s Fantasia, the first movie released in stereo. It became one of the first surround sound techniques. Subsequently, CinemaScope, Dolby Stereo and others brought high-quality, multi-channel sound to the cinema hall.
Musicians could compose scores keeping these advancements and the context of the film in mind. Bernard Herrmann - Alfred Hitchcock’s longtime collaborator - was an early pioneer. He brought in an unusual mix of instruments as and when a scene needed them, rather than hiring a full band or orchestra for the whole film. From Wikipedia:
His use of nine harps in Beneath the 12-Mile Reef created an extraordinary underwater-like sonic landscape; his use of four alto flutes in Citizen Kane contributed to the unsettling quality of the opening, only matched by the use of 12 flutes in his unused Torn Curtain score; and his use of the serpent in White Witch Doctor is possibly the first use of that instrument in a film score. Herrmann's involvement with electronic musical instruments dates back to 1951, when he used the theremin in The Day the Earth Stood Still.
In the 70s, John Williams took full advantage of Dolby Stereo, bringing grand orchestration to the Star Wars movies. Hans Zimmer and our own AR Rahman took advantage of musical developments outside the cinema hall - synthesizers and digital recording techniques.
Imagine how this would have sounded in a theatre!
Multi-track recording made recordings more adventurous
In the early days, music had to be recorded live in a single take. That meant the whole band played together around a few mics, and a single mistake by anyone meant an entire re-take. To solve this rather taxing problem, engineers developed the technique of recording parts separately and combining them - aka multi-track recording. This gave musicians greater control over instrument volumes too, since they could now be adjusted in the mix - so a drummer didn’t necessarily need to hold back so as to not overpower everyone else.
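(For the technically curious: here’s a minimal sketch - my own toy illustration, not any particular studio’s workflow - of what a multi-track mixdown boils down to: separately recorded parts summed together, each with its own gain.)

```python
import numpy as np

SR = 44100  # samples per second

def mixdown(tracks, gains):
    """Sum separately recorded mono tracks into one mix,
    applying an independent gain (volume) to each track."""
    length = max(len(t) for t in tracks)
    mix = np.zeros(length)
    for track, gain in zip(tracks, gains):
        mix[:len(track)] += gain * track
    peak = np.max(np.abs(mix))
    # keep the combined signal within [-1, 1] so nothing clips
    return mix / peak if peak > 1.0 else mix

# Stand-in 'recordings': a quiet bass note and loud drum-like pulses
t = np.linspace(0, 2, 2 * SR, endpoint=False)
bass = 0.3 * np.sin(2 * np.pi * 110 * t)
drums = 0.9 * np.sin(2 * np.pi * 200 * t) * (t % 0.5 < 0.05)

# The drummer no longer has to hold back - just turn the track down in the mix
final_mix = mixdown([bass, drums], gains=[1.0, 0.4])
```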
This practical tool soon became an instrument by itself, and musicians started innovating to take advantage of it. A singer could record her vocal track multiple times and ‘overlay’ it, creating an ethereal choral effect - not unlike what resonant Gothic architecture did to chants. Similarly, guitars could sound larger than life, or just be made to harmonise - even if the band had just a single guitar player. The Beatles and their producer George Martin merit a special mention here. Their decision to stop touring in the mid 60s freed them up to compose songs without worrying about how to replicate them on stage. The second half of their catalog is a testament to using the studio as an instrument as much as to their own songwriting skills. A great example is Tomorrow Never Knows, which ends what I think is the best Beatles record, Revolver. The song is said to be the first true ‘electronic’ song, with its dazzling use of effects. This was made possible through John Lennon’s creativity, an excellent dealer, and of course… Multi-track.
I like how the video itself kinda pays tribute to the recording studio, something I realised only when searching for a link to embed here!
It's hard to imagine iconic albums like Pink Floyd’s Dark Side of the Moon or The Beatles’ own Sgt Pepper’s Lonely Hearts Club Band without multi-track. It allowed individual musicians to experiment - guitarists could record their solos, slow them down, or do all kinds of things to get interesting effects. Jimmy Page of Led Zeppelin did experiments like this, but once again, for the best example we turn to the Fab Four. In I’m Only Sleeping, George Harrison wrote a guitar solo, recorded it, listened to the tape backwards and learnt to play that, recorded it again, and the reverse of that recording was used in the final mix. The result was a languid, lazy effect that went delightfully with the theme of the song and John Lennon’s somnolent vocal delivery.
Listen from 1:28 here (ideally start from the top so you get the vibe)
To be clear - these are not mere sonic improvements in capturing and reproducing more information, like smartphones adding more pixels every year. It's about opening the doors to completely new kinds of music. Early on, multi-track recording would birth psychedelic music; in later years, hip-hop and electronic music.
BTW, one man who had a major role to play in the invention of multi-track was Les Paul. Yes, the same man who helped invent the electric guitar.
The Walkman leading to musical diversification… probably
The personal audio revolution started with the release of the Sony Walkman in 1979.
The immediate effects were obvious - people could carry their music around. ‘Freedom’ was a big theme of the product’s marketing, and that would have resonated with the spirit of the 80s. From a Smithsonian article:
People used the Walkman to help manage their moods and calm stress; dentists would plop Walkman headphones on a patient before drilling. Andy Warhol tuned out the din of Manhattan.
There were other interesting effects as well - mixtape culture took off, a precursor to today’s playlists. The ubiquity of people on headphones today (and how we as a culture have come to accept that) traces back to the Walkman. Of course, a lot of today’s negatives can be traced back to the Walkman too. Some say “electronic narcissism” started with the Walkman, and Der Spiegel disparagingly said, “A technology for a generation with nothing left to say.”
I personally have another take - that the Walkman was a catalyst for genre splintering. If people can listen to whatever they like without worrying about what the world will think - be it parents or judgemental college mates - then experiment they will. I can’t find definitive research on this, but many articles mention “exploring genres” when talking about the Walkman’s legacy. And the late 80s / early 90s did give us many new subgenres. While there were probably a lot of contributing factors, including organic musical evolution, I reckon the Walkman, and the personal audio revolution it ushered in, had a not-too-small part to play.
One concrete example I can find is City Pop, a happy, modern, utopian-sounding genre from Japan that peaked in popularity in the 80s. It featured synthesizers meant to take advantage of the Walkman headphones’ ability to showcase detail. Oh, and self-help audiobooks became a thing too.
Speaking of the Walkman and what it would usher in…
Whispered vocals for a headphoned world
As music grew to become a professional industry, it became vital to take into account how it was likely to be consumed - that would impact the recording and mixing process. Classic rock sounds the way it does because it was meant to be consumed communally via speakers in dorms or living rooms. Louder vocals, punchy guitars, highly rhythmic. Think Van Halen, Queen, Bon Jovi, even a lot of R&B and Soul.
Since the 2000s though, most music listening happens on headphones and earbuds (I am disregarding the philistines who use laptop / phone speakers). This has impacted both song production and writing. A lot of modern pop - think Lana Del Rey, Billie Eilish, Selena Gomez - is ‘whispered’, ASMR-like. This, coupled with lyrics that tend toward the individualistic or confessional, makes for an experience that could literally and metaphorically be described as heady. Here’s my favourite music YouTuber Mic The Snare talking about this, from 9:19 (the whole video is a great watch).
The headphonification of music consumption also means producers pack in a lot more bass so that it remains audible on tiny drivers (which is why the same song can sound quite boomy in a club setting). This video talks about how mixing engineers come up with all kinds of techniques to replicate the feeling of bass through personal audio.
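One widely used idea - and this is a toy sketch of the general principle, not necessarily what that video covers - is psychoacoustic bass enhancement: add upper harmonics of the bass note, which tiny drivers can reproduce, and the ear ‘fills in’ the missing fundamental.

```python
import numpy as np

SR = 44100
t = np.linspace(0, 1, SR, endpoint=False)

fundamental = 50.0  # Hz - too low for most earbuds to reproduce with any weight
sub_bass = np.sin(2 * np.pi * fundamental * t)

# Quieter harmonics at 100, 150 and 200 Hz: small drivers can play these,
# and the brain infers the 50 Hz fundamental from their spacing
harmonics = sum((0.5 / n) * np.sin(2 * np.pi * fundamental * n * t) for n in (2, 3, 4))

enhanced_bass = sub_bass + harmonics  # 'bigger'-sounding bass on earbuds
```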
The rise of headphone-specific features like Apple's Spatial Audio points to how music production increasingly takes personal audio into account. Even before this, some albums were recorded with binaural techniques to create a more immersive headphone experience - an example being Pearl Jam's Binaural in 2000.
From a really good article on this:
Producers are increasingly mixing music for smaller speakers and the relatively low sound quality that comes from streaming music. Jeff Ellis, producer of Frank Ocean’s Channel Orange, makes a point of testing how songs sound on smartphone speakers and headphones because he knows this is how most people listen to music now.
Another fabulous video by Mic The Snare talks about how, while pop music may have gotten melodically simpler, it has probably become texturally more interesting. For example, Finneas used a traffic signal sound to develop a sample for Billie Eilish. Again, if people are listening on headphones that can reveal interesting detail like that, then songwriters (good ones, anyway) will adapt.
Related: Boomy hip-hop for a car stereo world
For a long time, automobiles and the role they played in (mostly American) lives dictated music, such as radio-friendly rock of the 70s. But things would take a boomier turn in the 80s with the rise of hip-hop. Folks wanted to show off their systems and make a statement about the music they were listening to… And a low-end thump from the car moving through neighbourhoods was a way to do it.
Enter - the war for the bassiest bass. There’s a fabulous episode about this on the sound-focused podcast 20000 Hz, which you can check out here. The episode description should tell you all you need to know about the connection:
In the late '80s and early '90s, a seismic subculture shook the streets… literally. “Boom Cars,” decked out with custom sound systems, roamed neighborhoods blasting the bassiest music ever recorded. But where did this movement come from, and why did it fade away? In this episode, we dive into the world of Miami Bass, dB Drag Racing, and the infamous tapes that could shred your subwoofers.
Here’s the song that would apparently tear your speakers out. Mine are still intact.
Once again, context matters. It’s not like musicians couldn’t have beefed up the bass before the age of car stereos, but there was no practical or artistic need for them to.
Also related - the loudness wars aka why some RHCP albums sound bad
In a bid to stand out on radio, some producers would deliberately make the mix ‘loud’. The problem was listening fatigue, as well as a loss of musical quality. Dynamic range (the contrast between the softer and louder parts) was sacrificed. Or as one article put it as far back as 2007: it doesn’t allow the music to breathe.
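To make that concrete, here’s a rough sketch - a toy measure of my own, not the industry’s DR meter - of crest factor: the gap, in dB, between a track’s peaks and its average level. Squashed, ‘loud’ masters have a much smaller gap.

```python
import numpy as np

def crest_factor_db(samples):
    """Peak-to-RMS ratio in dB - a rough proxy for dynamic range."""
    peak = np.max(np.abs(samples))
    rms = np.sqrt(np.mean(samples ** 2))
    return 20 * np.log10(peak / rms)

t = np.linspace(0, 1, 44100, endpoint=False)
quiet_verse = 0.1 * np.sin(2 * np.pi * 220 * t)
loud_chorus = 0.9 * np.sin(2 * np.pi * 220 * t)

dynamic_master = np.concatenate([quiet_verse, loud_chorus])  # soft-to-loud contrast kept
loud_master = np.clip(dynamic_master * 8, -0.9, 0.9)         # everything pushed to the ceiling

print(f"dynamic master: {crest_factor_db(dynamic_master):.1f} dB")
print(f"loud master:    {crest_factor_db(loud_master):.1f} dB")  # smaller = more squashed
```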
Several albums in the 2000s have been lambasted for this practice - but the Red Hot Chili Peppers’ monumental hit Californication seems to be the poster boy. To be clear - I like the band and I think the songwriting on the album is excellent. It’s like looking at gorgeous scenery through bad glasses, if that makes sense. But hey, the album did sell close to 20 million copies, so maybe us audiophiles should can it?
Things got so bad that there were open letters, public outcry, and even a non-profit called Turn Me Up!, which certified that albums bearing its sticker had good dynamic range. There was even a Dynamic Range Day, which is the most gloriously geeky audiophile thing I have seen.
The Roland TR-808 and all it did for hip-hop
In this entire list, I’m not being very brand-specific - it was electric guitars in general that changed rock, and mics in general that changed singing. But one specific device changed a lot of modern music - the Roland TR-808. It was an electronic drum machine that replicated the sounds of percussion, making it easier to write and produce music - obviating the need to source samples or keep an actual bulky, expensive drum kit around.
Like many other items on this list, it was meant to solve a functional problem, but artistic choices emerged. The TR-808, initially a commercial failure, blew up in its afterlife in the underground. It revolutionised hip-hop, and subsequently pop and electronic music. Incidentally, one of the first mega hits it would spawn was not a hip-hop track but Marvin Gaye’s Sexual Healing.
But it is still remembered today as integral to shaping hip-hop. Run-DMC, Public Enemy, the Beastie Boys and Outkast were all users. In 2008, Kanye West named an album - 808s & Heartbreak - after the instrument, which was used prominently in its production.
Roland has even made a playlist curating several top tracks made with the TR-808.
And because there was a common ‘code’, collabs happened. From The Verge:
The 808 broke down the walls between genres, and spawned collaborations between some of the biggest acts from different spaces. Because the 808 was so adaptable, it was like the first open-sourced sound, with artists building on each other’s interpretations and making it their own. Lil Jon and Usher’s “Yeah” was an unlikely collaboration that showcased an R&B singer on an 808 and made Usher instantly relevant again. Marvin Gaye’s “Sexual Healing” is nowhere near a hip-hop or techno record, yet it relied entirely on the 808. The 808 is like the not-so-secret sauce of hit records — sprinkle in an 808 drum, and your song instantly sounds better.
The instrument changed the way musicians think about composing tracks. It’s been called hip-hop’s equivalent of the Fender Stratocaster. Speaking of which…
New instruments = new emotions
While the TR-808 merited its own section, let’s look at some other examples across time periods.
In the classical era, the pianoforte (forerunner of the modern piano, whose hammer action let players control how soft or loud each note was) allowed for far more expressive playing than the harpsichord. The microphone - invented just about a century ago - changed the art of singing. It freed singers from having to bellow or shout to be audible over instrumentalists, and made even whispers loud enough. Crooners such as Frank Sinatra and Chet Baker took advantage of this - bringing in new forms of expression and lyricism (it’s hard to sing a song about tender, caring love if you can only shout). It’s hard to think of a Johnny Cash, Billie Eilish or Kishore Kumar without microphones. Here’s the ‘king of croon’, Bing Crosby, with a song that pretty much showcases what the style is about:
Amplification in general led to new playing styles and brought traditional instruments to the stage and into the popular imagination. Sona Jobarteh popularised the kora, a West African instrument often accompanying traditional storytelling. Makoto Taiko is a group that plays the traditional Japanese taiko drum. There are rock bands like Pierce Brothers and Like a Storm from the antipodes that use the didgeridoo. And of course, Ravi Shankar’s amplified sitar performances would end up inspiring The Beatles.
If we expand instruments to mean specific effects - and you know where I am going with this - Cher’s Believe ushered in the popular usage of the most controversial musical technology of recent years - auto-tune. The technology actually had its roots in oil exploration; its inventor Andy Hildebrand realised what he had created could be used to correct off-pitch vocals. Again, some artists used it as a stylistic choice (though many say it gets overdone or masks bad singing).
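For the curious, the core idea is easy to sketch (a deliberate simplification - real pitch correction works frame by frame on the audio itself): detect the sung frequency and snap it to the nearest note.

```python
import math

A4 = 440.0  # reference pitch in Hz

def nearest_note_hz(freq_hz):
    """Snap a detected frequency to the nearest equal-tempered semitone."""
    semitones_from_a4 = 12 * math.log2(freq_hz / A4)
    return A4 * 2 ** (round(semitones_from_a4) / 12)

def correction_cents(freq_hz):
    """How far off the singer was, in cents (100 cents = one semitone)."""
    return 1200 * math.log2(nearest_note_hz(freq_hz) / freq_hz)

sung = 452.0  # a slightly sharp attempt at A4 (440 Hz)
print(f"sung {sung} Hz -> corrected to {nearest_note_hz(sung):.1f} Hz "
      f"({correction_cents(sung):+.0f} cents)")
```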
But no instrument exemplifies this point better than the electric guitar. Originally invented just to make the acoustic guitar more audible, it revolutionised popular music. By adding effects, guitarists were able to crank out all kinds of sounds, birthing genres such as funk, psychedelic rock and heavy metal. Some of these effects were discovered accidentally - distortion first came about thanks to damaged amplifiers - and some were deliberately engineered. Guitarists would come up with playing techniques specific to the electric guitar, such as power chords, whammy bar usage, and the deliberate use of shrieking feedback. Legends like Jimi Hendrix and Eddie Van Halen wrote a new sonic vocabulary for the instrument. The electric guitar - now a symbol of rock and perhaps of all music - is a perfect example of how technology came in to solve one problem but ended up leading to a whole new style of playing.
You just wanna see 1:51 onwards.
And finally… Streaming
A surefire way to cause chaos among more than two opinionated music fans is to ask how streaming has changed music. Even the regular person who doesn’t overthink these things will have an opinion or two. And honestly, this could be an essay by itself.
But since our emphasis here is on songwriting (rather than, say, how artists are not being paid enough), here are a few things I can think of:
Songs are getting shorter, and choruses are coming in earlier. Gone are the long intros of the 80s; now musicians will do anything to avoid the dreaded ‘skip’. Spotify pays only if a stream crosses 30 seconds, so naturally someone released a 100-song album filled with 30-second tracks. Why bother with more? I feel this was more a commentary than an artistic choice, but what a perfect example for this article.
And while data always has, to some extent, dictated the music being made, the digital era gives Big Music a surfeit of it. It’s likely that major pop songs today are pretty much engineered based on past streaming data. I’m not saying this is a good or bad thing - incentives always drive mass art, and some folks always take it to an extreme. Purists can console themselves with the fact that the internet era has also given rise to platforms like Bandcamp and more diverse musical choices than before.
Streaming has impacted a lot else too - the marketing, the resurgence of the music video, globalisation of genres, and more… But we’ll explore that in another piece.
So there you go. A few examples of how technology has shaped music over the years. My emphasis here was on just the songwriting - not distribution or economics. At the same time, I am also leaving out many other factors that impacted songwriting - such as changing culture. Of course, these aspects are important, and I’ll write about them later.
Even with the reduced scope, I might have missed out on several things in this essay itself. I don’t intend to be comprehensive in this piece - I would love to get more examples and points of view so I can keep building up my own knowledge.
Hey, so why did you write this piece?
One of the first pieces I’m doing for The First Quarter is a look at how music has been shaped by the internet in the 21st century. There are several lenses I’m looking through - ranging from the economics to the music itself - and I am finding myself going down several rabbit holes. After all, this whole project is really an excuse to do just that.
The practical upshot of this, dear subscriber, is that you might find listicles, facts, theses and trains of thought being sent your way. Think of them as sub-essays, working towards a larger piece. Some of these sub-essays might end up being fairly long themselves - this article for example is ~3500 words!
Of course, they will be fully formed themselves, as this piece hopefully was, but they’re really building blocks. Needless to say, they will all be tied back to why you signed up for this newsletter - making sense of the internet, technology, marketing (maybe!) - in some way.
Thank you for reading! This was terrific fun to put together, and I am excited to see where this series of essays will go. This article was the outcome of me researching just one aspect of how the internet has changed our lives, and I wanted to hunt for examples of how tech has impacted songwriting over the years. The larger piece I am building towards - hopefully that will come out in March - is how popular music has changed in the 21st century.
If you have an opinion on the topic, send it my way :)
Chuck