
Sound and concentration: “Functional music brings certain cognitive states”

by admin

The Berlin start-up Endel makes functional music – sounds designed to help people relax, work, or fall asleep better. The technology behind it uses AI – and, in the future, will also draw on well-known music catalogs from labels such as Universal Music. In an interview with MIT Technology Review, company boss Oleg Stavitsky explains what’s behind it.

Mr. Stavitsky, when you go to your website or see your programs in the App Store, it says you deliver “personalized soundscapes” to customers. What is the business model behind it?

Oleg Stavitsky: We have developed a proprietary, patent-protected and scientifically validated technique for generating AI-powered functional soundscapes. Ultimately, this resulted in two lines of business. One is an ecosystem of direct-to-consumer products – apps that people subscribe to in order to access these soundscapes. Our personalized, real-time customizable soundscapes are available on different platforms.

Endel’s second mainstay is the use of our technology to create static recordings, which are then distributed on various digital streaming platforms. An example of this is Amazon Music, which commissioned us to create a playlist of soundscapes to help you fall asleep. We’ll also be collaborating and producing content with some of the biggest music companies in the world, including Universal Music, with whom we recently announced a partnership. So to sum it up: we see ourselves as a technology-driven functional music company that offers both real-time content and static soundscapes.

Then what do people use these sounds for?

For concentrating, relaxing or sleeping, and also for specific activities. Within these basic cognitive states there are many smaller use cases that we cover. Concentration, for example, includes so-called deep work. Or you need sounds that help you get through a short sprint session or better absorb a text you’re studying. But it’s also about yoga, meditation, power naps, and supporting longer sleep phases. That’s our focus today, but we keep adding more use cases for our technology.


What exactly is functional music?

We define functional music as music or sounds designed to help the user reach a certain cognitive state. These sounds are not meant to be consumed consciously; it’s a form of background music that affects our cognitive state or helps people achieve a certain one.

Oleg Stavitsky, CEO of Endel.

(Image: Endel)

Endel emphasizes that the soundscapes generated in this way have been neuroscientifically validated. What does it mean exactly?

That means two things. First, the AI model behind our system was developed based on neuroscientific insights. To do this, we work together with various brain researchers, sleep researchers and other experts. Second, we have published a peer-reviewed paper on its effectiveness. It appeared in Frontiers in Computational Neuroscience, a respected scientific journal.

In addition, we are currently participating in a very large, multi-year project called “Lullabyte”, which brings together five major European universities to study the effect of sound and music on sleep. There, people literally lie in sleep labs wearing EEG headbands while listening to AI-generated sounds. So we take the scientific validation of Endel very, very seriously – the effect should be demonstrated in a real laboratory environment.

How much AI is being used in the generation of your soundscapes today? Is there still a collaboration with “real” musicians?

That’s a bit of an unfair question, I would say. Every AI company in this field still works with real musicians – the question is what role they play in concrete terms. Of course we no longer produce music by hand, with people sitting down and composing whole pieces. Instead, our musicians create so-called stems, i.e. sounds that are between one and ten seconds long. These are then fed into our AI model and adjusted using our internal tools.

Our algorithm analyzes the stems and synthesizes new stems from the source material – depending on which modality the user has chosen, be it concentration, relaxation or sleep. Then you click a button and the soundscape is generated. You can then edit it with our tools and shape it until you are satisfied with the result, or click on a section and apply additional filters and effects. Finally, the result is exported as a static or real-time soundscape.
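Described abstractly, the pipeline Stavitsky outlines – select short source stems by target state, then stitch them into a longer soundscape – could be sketched as follows. All names, stem data and the “brightness” heuristic here are illustrative assumptions, not Endel’s actual system:

```python
import random

# Hypothetical stem catalog: stem name -> (duration in seconds, brightness 0..1).
# "Brightness" stands in for whatever acoustic features a real system would analyze.
STEMS = {
    "soft_pad": (8, 0.2),
    "slow_pulse": (6, 0.3),
    "bright_pluck": (3, 0.8),
    "noise_wash": (10, 0.1),
}

# Target brightness range per cognitive state (modality).
MODALITY_PROFILES = {
    "sleep": (0.0, 0.3),
    "relax": (0.1, 0.5),
    "focus": (0.4, 1.0),
}

def select_stems(modality):
    """Pick source stems whose brightness fits the chosen modality."""
    lo, hi = MODALITY_PROFILES[modality]
    return [name for name, (_, b) in STEMS.items() if lo <= b <= hi]

def generate_soundscape(modality, target_seconds, seed=0):
    """Stitch randomly chosen matching stems into a timeline that
    covers at least target_seconds of audio."""
    rng = random.Random(seed)
    pool = select_stems(modality)
    timeline, elapsed = [], 0
    while elapsed < target_seconds:
        stem = rng.choice(pool)
        timeline.append(stem)
        elapsed += STEMS[stem][0]
    return timeline

playlist = generate_soundscape("sleep", 30)
```

The fixed seed makes a given soundscape reproducible, which mirrors how a generated “static recording” could be exported for streaming platforms, while a real-time product would keep drawing new stems indefinitely.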

Endel recently announced a collaboration with Universal Music. There you will also be able to use their catalog as a starting point for functional music. How do you imagine that?

It works exactly as I described: we get stems – the individual tracks – from a specific song or an entire album from the Universal Music catalogue. These stems are analyzed by our technology, new ones are created from the originals, and they are then stitched together and overlaid with post-processing effects. This creates a new soundscape, which is exported as a static recording and delivered to Universal Music for distribution. So basically we’re going to produce functional soundscape versions of the Universal Music catalogue. Imagine listening to sounds derived from The Weeknd to fall asleep, or to music by Drake to focus at work.

How do musicians react to this collaboration? Do they feel like it’s some kind of remix?

Most musicians are excited about it because it gives them the opportunity to explore new audiences and markets and experiment a bit with their sound without explicitly positioning it as the artist’s new album. So yes, it’s sort of a remix or reinterpretation of the original work. Will we have access to the stems of all their music? Not all of them, but we’ll get access to the stems that we can turn into functional soundscapes.

The interview was conducted via voicemail and edited for length and clarity.




(bsc)
