Society 5.0

The idea behind Society 5.0 is to create a super-intelligent society in which various social challenges are solved by implementing innovations of the fourth industrial revolution — such as IoT, Big Data, Artificial Intelligence (AI), robotics, or the sharing economy — into every industry and social life. In such a world, people, machines and their environment are interconnected and able to communicate with each other [1]. In practice, Society 5.0 will, among other things, seek to provide better care for seniors — in Japan, the population is ageing rapidly, and if there were ever to be a shortage of hands to care for the elderly in the future, it is the new quality of computing that will be able to raise the standard of healthcare for retirees [2]. Society 5.0 is a term that refers to a new society in which technological developments are human-centred and seek valuable solutions for the lives of people around the world.

Solutions for Better Human Life

Fig. 1. Illustration of Japan’s social transformation plan — Society 5.0. 


History of the Development of Society

Society 5.0 is the result of nothing more than an evolution spanning five stages of social development: 

  • Society 1.0: Hunter-gatherer society (the way of life of the first humans, which lasted until about 12,000 years ago) — a society that based its lifestyle on hunting animals and gathering wild vegetation and other types of nutrients [3]. 
  • Society 2.0: Agricultural society (first appears around 10,000–8,000 years ago) — a society that focuses its economy primarily on agriculture and the cultivation of large fields [4]. 
  • Society 3.0: Industrial society (from the late 18th century onwards) — a society in which the dominant way of organising life is through mass production technologies, used to produce immense quantities of goods in factories [5]. 
  • Society 4.0: Information society (since the second half of the 20th century) — a society in which the creation, dissemination, use, integration and management of information is an essential aspect of economic, political or cultural activities [6]. 

Technological Integration for a Better Quality of Life

The concept of collecting data from the world around us, processing it by computers and putting it to practical use is not new in today’s world. The operation of air conditioners, for example, is based on exactly this principle. They regularly measure the temperature in a room and then compare the reading with a pre-programmed temperature. Depending on whether the measured temperature is higher or lower than the one originally set, the device pauses or starts the airflow. This mechanism uses automated computer systems. The term ‘information society’ (Society 4.0) therefore refers to a society in which each such system acquires data, processes it and then uses it in its own specified environment.
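The feedback loop described above can be sketched in a few lines of code. This is a simplified illustration with invented readings and thresholds, not the firmware of any real air conditioner:

```python
# Simplified sketch of the loop described above: compare a measured
# temperature with the pre-programmed set-point and pause or start the
# airflow accordingly. Readings and thresholds are invented.

TARGET_TEMP = 22.0   # pre-programmed temperature in deg C (assumed)
HYSTERESIS = 0.5     # dead band that prevents rapid on/off switching

def control_step(measured_temp, cooling_on):
    """Return whether the air conditioner should run after one reading."""
    if measured_temp > TARGET_TEMP + HYSTERESIS:
        return True           # room too warm -> start the airflow
    if measured_temp < TARGET_TEMP - HYSTERESIS:
        return False          # room cool enough -> pause the airflow
    return cooling_on         # within the dead band -> keep current state

state = False
for temp in [24.1, 23.2, 22.3, 21.7, 21.2, 23.8]:
    state = control_step(temp, state)
    print(f"{temp:.1f} -> cooling {'ON' if state else 'OFF'}")
```

A real unit would add sensor error handling and compressor protection delays; the dead band mirrors how actual thermostats avoid oscillating around the set-point.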

Now, knowing exactly what the idea of Society 4.0 is, we can understand what distinguishes it from Society 5.0. The fundamental difference is that Society 5.0, instead of using systems that operate in a defined, limited way, will use systems that operate in an integrated way, affecting the life of society as a whole. Data will be processed by advanced information systems, such as Artificial Intelligence, as these systems are adapted to process such large amounts of data. The main purpose of using the collected data will be to ensure everyone’s happiness and comfort [7]. At BFirst.Tech, we also see these needs and respond to them with specific tools. Our areas — Data Engineering and Data Architecture & Management — use innovative technological solutions to collect, analyse and manage data to support efficient and sustainable process management. This type of management has a significant impact on security, data reliability and strategic decision-making, which contributes to the prosperity of society.

The New Era of Prosperity and the Challenges It Faces

Society 5.0 aims to use state-of-the-art technology in such a way as to ensure the well-being of all people. The idea is that technological development can be a tool to address social inequalities, improve quality of life and create a more sustainable community. The main objectives it envisages are:

  • reducing social inequalities, 
  • speeding up medical services and increasing the precision of medical procedures and operations, 
  • increasing food production while reducing waste, 
  • improving public safety, 
  • solving problems caused by natural disasters, 
  • promoting public participation in the development of ideas and projects, 
  • ensuring transparent access to data and ensuring information security. 

Society 5.0 aims to create a harmonious balance between technological development and societal needs, but this brings its own challenges. One of the most crucial conditions for this vision’s successful implementation is the commitment and leadership of governments. This is because governments are responsible for aspects such as funding, the implementation of technology in public life or the creation of new security-related legislation. Cybersecurity risks are another significant challenge. It is important to bear in mind that the actions of hackers, or issues related to data theft, can effectively hinder the development of innovation, so it is crucial to ensure a sound level of data protection [8].

The United Nations Sustainable Development Goals

Society 5.0 and the United Nations Sustainable Development Goals are two separate initiatives that are moving in a very similar direction. Indeed, these two innovative approaches share one common goal — to eliminate social problems sustainably. It can be said that Society 5.0 will, in a way, realise the Sustainable Development Goals, through specific actions. These actions, matched with specific goals, are:

  • aiming for more accurate and efficient diagnosis of diseases through the use of advanced technologies (such as Big Data and Artificial Intelligence),

Fig. 2. Illustration of UN Sustainable Development Goal 3. 


  • disseminating e-learning and making education more accessible,

Fig. 3. Illustration of UN Sustainable Development Goal 4. 


  • creation of new jobs related to fields such as robotics, Artificial Intelligence or data analytics,

Fig. 4. Illustration of UN Sustainable Development Goal 8. 


  • promoting innovation and investing in new infrastructure (such as smart networks or high-speed internet),

Fig. 5. Illustration of UN Sustainable Development Goal 9. 


  • creating smart cities that use sensors and data analysis to optimise traffic flow, reduce energy consumption and improve safety, 

Fig. 6. Illustration of UN Sustainable Development Goal 11. 


  • reducing greenhouse gas emissions and promoting sustainable transport.

Fig. 7. Illustration of UN Sustainable Development Goal 13.


Common Direction

It is crucial that the benefits of Society 5.0 are equally available to everyone, so that all have the same opportunity to benefit from its potential. Only with such an approach can Society 5.0’s contribution to the Sustainable Development Goals have a chance of an effective outcome [9]. BFirst.Tech, as a substantive partner of the United Nations Global Compact Network Poland (UN GCNP), also pursues the Sustainable Development Goals through the specific activities it undertakes. In the areas that focus on data processing, design and management, namely Data Engineering and Data Architecture & Management, our company implements goals that overlap with those targeted by Society 5.0: Goal 9 — securing, aggregating and analysing big data, and optimising, managing and controlling the quality of processes using AI; Goal 11 — securing critical information that has an impact on improving the lives of urban residents; and Goal 13 — reducing resource consumption and waste emissions by increasing production efficiency.

Changes Affecting Numerous Areas

With the implementation of the Society 5.0 concept, many various facets of society can be modernised. As mentioned earlier, one of these is healthcare. With Japan’s ageing population, the country is currently grappling with rising expenses and the need to care for seniors. Society 5.0 solves this problem by introducing Artificial Intelligence, which collects and then analyses patient data to provide the highest level of diagnosis and treatment. Remote medical consultations, in turn, positively impact the convenience of the elderly, giving them the possibility of contacting a doctor even from their own place of residence.

Another facet is mobility. Most rural areas of Japan do not have access to public transport, partly because a declining population has left these areas increasingly sparsely populated. The growing shortage of drivers, linked to the ever-expanding e-commerce sector, is also a problem. The solution that Society 5.0 proposes to these issues is the implementation of autonomous vehicles such as taxis and buses. Also worth mentioning is the area of infrastructure: in Society 5.0, sensors, AI and robots will autonomously inspect and maintain roads, tunnels, bridges and dams. The final area worth mentioning is financial technology (FinTech). In Japan, the majority of monetary transactions are still carried out using cash or banking procedures, which can take far too long. Society 5.0 proposes the implementation of Blockchain technology for monetary transactions and the introduction of universal smartphone payments available everywhere [10]. 


Society 5.0 is the concept of a society that uses advanced technologies to build a community based on sustainability, social innovation and digital transformation. The aim of Society 5.0 is not only to achieve economic growth, but also to improve the quality of life of citizens. There are also challenges behind the development of this idea, mainly related to data security and the introduction of appropriate regulations to ensure a transition that is smooth and comfortable for all. Society 5.0 largely shares a vision of the future with the Sustainable Development Goals (SDGs) announced by the United Nations — many of the SDG targets can be achieved through the implementation of this concept. Society 5.0 encompasses a wide range of areas of society, including healthcare, mobility, infrastructure and financial technology. Through advanced technologies in these areas, the aim is to create a sustainable and innovative society that will positively impact citizens’ quality of life.


[1] [Accessed: 7 March 2024]. 






[7] Atsushi Deguchi, Chiaki Hirai, Hideyuki Matsuoka, Taku Nakano, Kohei Oshima, Mitsuharu Tai, Shigeyuki Tani “What is Society 5.0?” 





What makes some websites appear immediately after entering a search query, while others disappear in the midst of other sites? How can we make it easier for users to find our website? SEO is responsible for these and other aspects, and it has nothing to do with randomness. Whether you are just starting your journey with running a website or have been doing it for a long time, whether you handle everything yourself or delegate it to someone else, it’s important to know the basic principles of SEO. After reading this article, you will learn what SEO is, what it consists of, and how to use it properly. 

What is SEO?

Let’s start with what SEO actually is and what it consists of. SEO (Search Engine Optimization) is a set of activities undertaken to improve the positioning of a website in search results [1]. It consists of various practices and strategies, such as proper text editing and building a link profile. SEO also involves adapting the website to algorithms used by search engines. These algorithms determine which pages will be displayed on the first page of search results and in what order. Through optimization, a website can gain a better position in the search results, which increases its visibility.

It is important to remember, of course, that SEO is only one way to improve the popularity of a website. It doesn’t produce results as quickly as, for example, paid advertising, but it’s relatively inexpensive. Furthermore, the achieved effect will last longer and won’t disappear after a subscription expires, as is the case with many other marketing techniques.

On-site positioning

We can divide SEO into two types: on-site and off-site. On-site SEO includes all activities that take place on a specific website. These are all editorial, technical, or other issues that affect content loading speed. By taking care of these aspects, the website is more readable for both the user and Google’s robots. Good on-site SEO requires attention to:

  • Metadata and ALT description – even if a page is readable for users, what about search engine algorithms? To make it readable for them as well, it’s worth taking care of meta titles and descriptions, which will help search engines find our website. In addition, it is also worth taking care of ALT descriptions, also known as alternative text. Algorithms don’t understand what’s in images. With this short description, they will be able to assign its content to the searched phrase and improve positioning. 
  • Header – this is another thing that affects more than just human perception. Proper distribution of headers and content optimization in them can significantly contribute to improved positioning. 
  • Hyperlinks – the set of links, also known as the link profile. Here we can distinguish between external and internal linking. External linking refers to links coming from websites other than our own and is considered off-site SEO. On the other hand, internal linking refers to links within a single website that redirect users to other tabs or articles. 
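The on-site elements above are easy to check mechanically. The sketch below is an illustrative audit script, not a full SEO tool, and the page content is invented; it scans HTML for a meta description, counts h1 headers, and flags images with no ALT text:

```python
# Illustrative on-site audit: find a meta description, count <h1>
# headers, and spot <img> tags missing ALT text. Standard library only.
from html.parser import HTMLParser

class OnSiteAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.has_meta_description = False
        self.h1_count = 0
        self.images_missing_alt = 0

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name") == "description":
            self.has_meta_description = True
        elif tag == "h1":
            self.h1_count += 1
        elif tag == "img" and not attrs.get("alt"):
            self.images_missing_alt += 1

# Invented example page with one ALT description missing
page = """
<html><head><title>Example</title>
<meta name="description" content="A short summary for search engines.">
</head><body>
<h1>Main heading</h1>
<img src="chart.png">
<img src="logo.png" alt="Company logo">
</body></html>
"""

audit = OnSiteAudit()
audit.feed(page)
print("meta description:", audit.has_meta_description)   # True
print("h1 headers:", audit.h1_count)                     # 1
print("images missing ALT:", audit.images_missing_alt)   # 1
```

A real audit would fetch live pages and also check title length, heading order and internal links; the point here is only that every on-site item in the list above is a concrete, verifiable property of the page.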

Off-site positioning

Off-site SEO refers to all activities undertaken outside the website to increase its visibility and recognition on the web. This helps generate traffic to the site from external sources. Such activities include:

  • Hyperlinks – again, a link profile that builds a site’s popularity and recognition on the web. Off-site SEO includes external linking, i.e. from other sources. It is worth ensuring that these are of good quality, i.e. from reliable sources. Gone are the days when only quantity mattered. Nowadays, search engine algorithms pay much more attention to value.
  • Internet marketing – this includes activities such as running profiles on social media, engaging in discussions with users on forums, or collaborating with influencers. These aspects do not directly affect search results but can indirectly contribute a great deal to boosting the number of queries about our website. 
  • Reviews – after some time, opinions about a website or business naturally appear on the web. It’s worth taking care of them and responding to users who leave them. Maintaining a good customer opinion is one aspect of building a trustworthy brand image [3].

Link building and positioning

Link building is the process of acquiring links that will lead to our website. These can be links from external sources (so-called backlinks) or internal linking. In that case, we are talking about links that will redirect us within a given website. A well-built link profile significantly affects positioning, as discussed above [4]. However, how has the significance of such practices changed? 

For many years, Google allowed SEO practitioners a lot of leeway in this regard. It was commonplace to encounter sites that had hundreds of thousands of links leading to them because the number of links had a significant impact on positioning, and their quality was not as crucial. The vast majority of these were low-quality links, which were posted online in forums, guestbooks, directories, comments, etc. This was often not handled by a human, but special applications were used that did it automatically. This approach brought significant results and could be carried out relatively inexpensively. But not for long. This all changed in April 2012. There was a kind of revolution back then – Google introduced a new algorithm called Penguin.

How did Penguin change SEO?

What is Penguin? It is an algorithm created by Google and introduced on 24 April 2012 to combat unethical SEO practices. SEO specialists tried to trick Google’s script by buying links and placing them in inappropriate places, but Penguin effectively caught them. 

Let’s try to answer how Penguin works. This script analyses the links leading to a particular website and decides on their value. If it deems them to be of low quality, it will lower the rankings of the sites they lead to. Such links include purchased ones (also from link exchanges) or those created by bots. It will also do the same for spam links, such as those placed in forum comments or on completely unrelated websites. However, its action is not permanent – when low-quality links are removed, a given website can regain its position. It’s worth mentioning that Penguin was not created only to detect fraud and reduce the visibility of websites in search results. Its role is also to reward honestly conducted websites. If it deems the link profile valuable, it will increase the visibility of such sites [6].
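Google has never published Penguin’s internals, but the shift it caused, from link quantity to link quality, can be illustrated with a toy scoring function. All domain names, trust values and thresholds below are invented for illustration:

```python
# Toy illustration only: Google has not disclosed how Penguin works.
# Trusted links add value, while links below a trust threshold count
# against the site instead of helping it. All numbers are invented.

SPAM_THRESHOLD = 0.3   # assumed cut-off for "low quality"

def profile_score(links):
    """links: list of (source_domain, trust) pairs with trust in [0, 1]."""
    score = 0.0
    for _domain, trust in links:
        score += trust if trust >= SPAM_THRESHOLD else -0.5
    return score

# 100 bot-posted directory links vs. two earned, reputable ones
bought = [(f"spam-directory{i}.example", 0.1) for i in range(100)]
earned = [("news-site.example", 0.9), ("university.example", 0.8)]

print(profile_score(bought))   # large negative score: spam now hurts
print(profile_score(earned))   # small positive score: quality helps
```

Under this hypothetical metric, the pre-2012 strategy of mass-producing links actively damages the profile, while a handful of reliable sources improves it, which is exactly the behaviour change the text describes.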

Ethical and unethical positioning

Depending on what we base our SEO techniques on, a distinction can be made between White Hat SEO and Black Hat SEO. These terms allude to the good and evil characters in Western tales: by culturally accepted convention, such characters usually wore white and black hats respectively, hence the association. But what do these terms mean, and how do the techniques differ?

White Hat SEO is ethical SEO, applied according to guidelines recommended by search engines. It involves procedures such as creating good quality content (free of duplicates). Using headings, bullet points and ensuring paragraphs are the right length is also important. Black Hat SEO, on the other hand, is characterized by unethical behavior aimed at artificially boosting popularity. These include practices such as overusing key phrases out of context, hiding text or buying links. Such actions can result in a decrease in trust in the site and the imposition of filters lowering its position. Even exclusion from search results is possible [7].


The key to increasing traffic to a website and improving its positioning is the skilful use of SEO tools. These are both on-site and off-site techniques that can significantly increase reach. When using SEO, it is important to remember to do it properly. By following the recommendations of search engines and adapting the content to both the user and the algorithms, we can count on positive results and improved statistics. Unethical practices, on the other hand, can lead to the opposite effect.









Moral dilemmas associated with Artificial Intelligence

Artificial intelligence is one of the most exciting technological developments of recent years. It has the potential to fundamentally change the way we work and use modern technologies in many areas. We are talking about text and image generators, various types of algorithms, and autonomous cars. However, as the use of artificial intelligence becomes more widespread, it is also good to be aware of the potential problems it brings with it. Given the increasing dependence of our systems on artificial intelligence, how we approach these dilemmas could have a crucial impact on the future image of society. In this article, we will present these moral dilemmas. We will discuss the problems associated with putting autonomous vehicles on the roads, then the dangers of using artificial intelligence to sow disinformation, and finally the concerns about the intersection of artificial intelligence and art.

The problem of data acquisition and bias

As a rule, human judgements are burdened by a subjective perspective; machines and algorithms are expected to be more objective. However, how machine learning algorithms work depends heavily on the data used to teach them. Training data selected with any bias, even an unconscious one, can therefore cause undesirable actions by the algorithm. Please have a look at our article for more information on this topic.

Levels of automation in autonomous cars

In recent years, we have seen great progress in the development of autonomous cars. There has been a lot of footage on the web showing prototypes of vehicles moving without the driver’s assistance or even presence. When discussing autonomous cars, it is worth pointing out that there are multiple levels of autonomy, and identifying which level one is referring to before the discussion. [1]

  • Level 0 indicates vehicles that require full control of the driver, who performs all driving actions (steering, braking, acceleration, etc.). However, the vehicle can inform the driver of hazards on the road, using systems such as collision warning or lane departure warnings. 
  • Level 1 includes vehicles that are already common on the road today. The driver is still in control of the vehicle, which is equipped with driving assistance systems such as cruise control or lane-keeping assist. 
  • Level 2, in addition to having the capabilities of the previous levels, is – under certain conditions – able to take partial control of the vehicle. It can influence the speed or direction of travel, under the constant supervision of the driver. The support functions include controlling the car in traffic jams or on the motorway. 
  • Level 3 of autonomy refers to vehicles that are not yet commercially available. Cars of this type are able to drive fully autonomously, under the supervision of the driver. The driver still has to be ready to take control of the vehicle if necessary. 
  • Level 4 means that the on-board computer performs all driving actions, but only on certain previously approved routes. In this situation, all persons in the vehicle act as passengers. Although, it is still possible for a human to take control of the vehicle. 
  • Level 5 is the highest level of autonomy – the on-board computer is fully responsible for driving the vehicle under all conditions, without any need for human intervention. [2] 

Moral dilemmas in the face of autonomous vehicles

Vehicles with autonomy levels 0-2 are not particularly controversial. Technologies such as car control on the motorway are already available and make travelling easier. However, the potential introduction of vehicles with higher autonomy levels into general traffic raises some moral dilemmas. What happens when an autonomous car, under the care of a driver, is involved in an accident? Who is then responsible for causing it? The driver? The vehicle manufacturer? Or perhaps the car itself? There is no clear answer to this question.

Putting autonomous vehicles on the roads also introduces another problem – these vehicles may have security vulnerabilities. Something like this could potentially lead to data leaks or even a hacker taking control of the vehicle. A car taken over in this way could be used to deliberately cause an accident or even carry out a terrorist attack. There is also the problem of dividing responsibility between the manufacturer, the hacker and the user. [3]

One of the most crucial issues related to autonomous vehicles is the ethical training of vehicles to make decisions. It is especially important in the event of danger to life and property. Who should make decisions in this regard – software developers, ethicists and philosophers, or perhaps country leaders? These decisions will affect who survives in the event of an unavoidable accident. Many of the situations that autonomous vehicles may encounter will require decisions that do not have one obvious answer (Figure 1). Should the vehicle prioritise saving pedestrians or passengers, the young or the elderly? How important is it for the vehicle not to interfere with the course of events? Should compliance with the law by the other party to the accident influence the decision? [4]


Fig. 1. An illustration of one of the situations that autonomous vehicles may encounter. Source:  

Deepfake – what is it and why does it lead to misinformation?

People using modern technology today are bombarded with information from all sides. The sheer volume and speed of information delivery mean that not all of it can be verified. This enables those fabricating fake information to reach a relatively large group of people, manipulate their victims into changing their minds about a certain subject, or even attempt to deceive them. Practices like this have been around for some time, but they did not previously pose such moral dilemmas. The advent of artificial intelligence dramatically simplifies the process of creating fake news and thus allows it to be created and disseminated more quickly.

Among disinformation techniques, artificial intelligence has the potential to be used particularly effectively to produce so-called deepfakes. A deepfake is a technique, based on artificial intelligence, for manipulating images depicting people. With the help of machine learning algorithms, modified images are superimposed on existing source material, creating realistic videos and images depicting events that never took place. Until now, the technology mainly allowed for the processing of static images, and video editing was far more difficult to perform. The popularisation of artificial intelligence has dissolved these technical barriers, which has translated into a drastic increase in the frequency of this phenomenon. [5]

Video 1. Deepfake in the form of video footage using the image of President Obama.

Moral dilemmas associated with deepfakes

Deepfakes could be used to achieve a variety of purposes. The technology can serve harmless projects, for example educational materials such as the video showing President Obama warning about the dangers of deepfakes (see Video 1). Alongside this, it finds applications in the entertainment industry, such as the use of digital replicas of actors (although this application can raise moral dilemmas of its own). An example is the digital likeness of the late actor Peter Cushing, used to play the role of Grand Moff Tarkin in the film Rogue One: A Star Wars Story (see Figure 2).


Fig. 2. A digital replica of actor Peter Cushing as Grand Moff Tarkin. Source: 

However, there are also many other uses of deepfakes that have the potential to pose a serious threat to the public. Such fabricated videos can be used to disgrace a person, for example by using their likeness in pornographic videos. Fake content can also be used in all sorts of scams, such as attempts to extort money. An example of such use is the case of a doctor whose image was used in an advertisement for cardiac pseudo-medications, which we cited in a previous article [6]. There is also a lot of controversy surrounding the use of deepfakes for the purpose of sowing disinformation, particularly in the area of politics. Used successfully, fake content can lead to diplomatic incidents, change the public’s reaction to certain political topics, discredit politicians and even influence election results. [7]

By its very nature, the spread of deepfakes is not something that can be easily prevented. Legal solutions are not fully effective due to the global scale of the problem and the nature of social network operation. Other proposed solutions to the problem include developing algorithms to detect fabricated content and educating the public about it.

AI-generated art

There are currently many AI-based text, image or video generators on the market. Midjourney, DALL-E, Stable Diffusion and many others, despite the different implementations and algorithms underlying them, have one thing in common – they require huge amounts of data which, due to their size, can only be obtained from the Internet – often without the consent of the authors of these works. As a result, a number of artists and companies have decided to file lawsuits against the companies developing artificial intelligence models. According to the plaintiffs, the latter are illegally using millions of copyrighted images retrieved from the Internet. To date, the most high-profile lawsuit is the one filed by Getty Images – an agency that offers images for business use – against Stability AI, creators of the open-source image generator Stable Diffusion. The agency accuses Stability AI of copying more than 12 million images from their database without prior consent or compensation (see Figure 3). The outcome of this and other legal cases related to AI image generation will shape the future applications and possibilities of this technology. [8]


Fig. 3. An illustration used in Getty Images’ lawsuit showing an original photograph and a similar image with a visible Getty Images watermark generated by Stable Diffusion. Source:  

In addition to the legal problems of training generative models on the basis of copyrighted data, there are also moral dilemmas about artworks made with artificial intelligence. [9]

Will AI replace artists?

Many artists believe that artificial intelligence cannot replicate the emotional aspects of art that works by humans offer. When we watch films, listen to music and play games, we feel certain emotions that algorithms cannot give us. They are not creative in the same way that humans are. There are also concerns about the financial situation of many artists. These concerns stem both from artists not being compensated for works of theirs included in the algorithms’ training collections, and from the reduced number of commissions caused by the popularity and ease of use of the generators. [10]

On the other hand, some artists believe that artificial intelligence’s different way of “thinking” is an asset. It can create works that humans are unable to produce. This is one way in which generative models can become another tool in the hands of artists. With them they will be able to create art forms and genres that have not existed before, expanding human creativity.

The popularity and possibilities of generative artificial intelligence continue to grow. Consequently, there are numerous debates about the legal and ethical issues surrounding this technology. It has the potential to drastically change the way we interact with art.


The appropriate use of artificial intelligence has the potential to become an important and widely used tool in the hands of humanity. It has the potential to increase productivity, facilitate a wide range of activities and expand our creative capabilities. However, the technology carries certain risks that should not be underestimated. Reckless use of autonomous vehicles, AI art or deepfakes can lead to many problems. These can include financial or image losses, but even threats to health and life. Further developments of deepfake detection technologies, new methods of recognising disinformation and fake video footage, as well as new legal solutions and educating the public about the dangers of AI will be important in order to reduce the occurrence of these problems.












Artificial intelligence and voice creativity


Artificial intelligence (AI) has recently ceased to be a catchphrase that belongs in science-fiction writing and has become part of our reality. From all kinds of assistants to text, image, and sound generators, the machine and the responses it produces have made their way into our everyday lives. Are there any drawbacks to this situation? If so, can they be counterbalanced by benefits? This post addresses these questions and other dilemmas related to the use of AI in areas involving the human voice. 

How does artificial intelligence get its voice? The development of AI voices encompasses a number of cutting-edge areas, but the most commonly used methods include  


  • machine learning algorithms that allow systems to learn from data and improve their performance over time. Supervised learning is often employed to train AI voice models using large data sets related to human speech. With supervised learning, an AI model learns to recognise patterns and correlations between text input and corresponding voice messages. The AI learns from multiple examples of human speech and adjusts its settings so that the output it generates is as close as possible to real human speech. As the model processes more data, it refines its understanding of phonetics, intonation, and other speech characteristics, which results in increasingly natural and expressive voices;  


  • natural language processing (NLP) enables machines to understand and interpret human language. Applying NLP techniques allows artificial intelligence to break down written words and sentences to find important details such as grammar, meaning, and emotions. NLP allows AI voices to interpret and speak complex sentences, even if the words have multiple meanings or sound the same. Thanks to this, the AI voice sounds natural and makes sense, regardless of the type of language used. NLP is the magic that bridges the gap between written words and speech, making AI voices sound like real people, even when complex language patterns are involved.  


  • Speech synthesis techniques allow machines to transform processed text into intelligible and expressive speech. This can be done in a variety of ways, for example, by assembling recorded speech to form sentences (concatenative synthesis) or using mathematical models to create speech (parametric synthesis), which allows for greater customisation. Recently, a breakthrough method called neural TTS (Text-to-Speech) has emerged. It uses deep learning models, such as neural networks, to generate speech from text. This technique makes AI voices sound even more natural and expressive, capturing the finer details, such as rhythm and tone, that make human speech unique.  



In practice, the available tools can be divided into two main categories:  Text-to-Speech and Voice-to-Voice. Each allows you to clone a person’s voice, but TTS is much more limited when it comes to reproducing unusual words, noises, reactions, and expressing emotions. Voice-to-Voice, put simply, “replaces” the sound of one voice with another, making it possible, for example, to create an artificial performance of one singer’s song by a completely different singer, while Text-to-Speech uses the created voice model to read the input text (creating a spectrogram from the text and then passing it to a vocoder, which generates an audio file) [1]. As with any machine learning issue, the quality of the generated speech depends to a large extent on the model and the data on which the model was trained.  
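As a toy illustration of the concatenative approach mentioned above, the sketch below assembles a "waveform" by joining hypothetical pre-recorded phoneme units. The unit bank and its sample values are entirely made up; real systems work with large, carefully segmented speech corpora.

```python
# Hypothetical unit bank: phoneme -> recorded samples (made-up numbers).
UNIT_BANK = {
    "HH": [0.1, 0.2, 0.1],
    "AH": [0.5, 0.7, 0.6, 0.4],
    "L":  [0.2, 0.3],
    "OW": [0.6, 0.8, 0.7, 0.5, 0.3],
}

def synthesize(phonemes):
    """Concatenate unit waveforms, averaging the single overlapping
    sample at each join to smooth the transition (a crude crossfade)."""
    out = []
    for ph in phonemes:
        unit = UNIT_BANK[ph]
        if out:
            out[-1] = (out[-1] + unit[0]) / 2  # blend the join
            out.extend(unit[1:])
        else:
            out.extend(unit)
    return out

wave = synthesize(["HH", "AH", "L", "OW"])  # a toy "hello"
```

Parametric and neural TTS replace this lookup-and-join step with a model that predicts the waveform (or a spectrogram fed to a vocoder), which is what allows them to capture rhythm and tone far more naturally.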

While the beginnings of the research on human speech can be traced back to as early as the late 18th century, work on speech synthesis gained momentum much later, in the 1920s-30s, when the first vocoder was developed at Bell Labs [2]. The issues related to voice imitation and cloning (which is also referred to as voice deepfakes) were first addressed on a wider scale in a scientific paper published in 1997, while the fastest development of the technologies we know today occurred after 2010. The specific event that fuelled the popularity and availability of voice cloning tools was Google’s publication of the Tacotron speech synthesis algorithm in 2017 [3].   


Artificial intelligence can already “talk” to us in many daily life situations; virtual assistants like Siri or Alexa found in devices and customer service call machines encountered in various companies and institutions are already widespread. However, the technology offers opportunities that could cause problems, raising controversy about the ethics of developing it in the future. 

At the forefront here are the problems raised by voice workers, who fear the prospect of losing their jobs to machines. For these people, apart from being part of their identity, their voice is also a means of artistic expression and a work tool. If a sufficiently accurate model of a person’s voice is created, then suddenly, at least in theory, that person’s work becomes redundant. This very topic was the subject of a discussion that ignited the Internet in August 2023, when a YouTube creator posted a self-made animation produced in Blender, inspired by the iconic TV series Scooby-Doo [4]. The controversy was caused by the application of AI by the novice author to generate dialogues for the four characters featured in the cartoon, using the voice models of the original cast (who were still professionally active). A wave of criticism fell on the artist for using someone else’s voice for his own purposes, without permission. The issue was discussed among animation professionals, and one of the voice actresses from the original cast of the series also commented on it. She expressed her outrage, adding that she would never work with this artist and that she would warn her colleagues in the industry against him. As the artist published an apology (admitting his mistake and explaining that his actions were motivated by the lack of funds to hire voice-overs and the entirely amateur and non-profit nature of the animation he had created), the decision to blacklist him was revoked and the parties reconciled. However, what emerged from the discussion was the acknowledgment that the use of artificial intelligence for such purposes needs to be legally regulated. The list of professions affected by this issue is long, and there are already plenty of works using people’s voices in a similar way. 
Even though this is mostly content created by and for fans paying a kind of tribute to the source material, technically speaking, it still involves using part of someone’s identity without their permission. 


Another dilemma has to do with the ethical concerns that arise when someone considers using the voice of a deceased person to create new content. The Internet is already full of “covers” in which newly released songs are “performed” by deceased artists. This is an extremely sensitive topic, considering the feelings of the family, loved ones, and fans of the deceased person, as well as how the deceased person would feel knowing that part of their image was used this way.  

Another danger is that the technology may be used for deception and misrepresentation. While remakes featuring politicians playing multiplayer games remain in the realm of innocent jokes, putting words that politicians have never said into their mouths, for example during an election campaign, is already dangerous and can have serious consequences for society as a whole. Currently, the elderly are particularly vulnerable to such fakes and manipulation; however, with the improvement of models and the parallel development of methods for generating images and mouth movements, even those familiar with the phenomenon may find it increasingly difficult to tell the difference between what is false and what is real [5].

In the worst-case scenario, such deceptions can result in identity theft. From time to time, we learn about celebrities appearing in advertisements that they have never heard of [6]. Experts and authorities in specific fields, such as doctors, can also fall victim to this kind of identity theft when their artificially created image is used to advertise various preparations that often have nothing to do with medicine. Such situations, already occurring in our country [7], are particularly harmful, as potential recipients of such advertisements are not only exposed to needless expenses but also risk their health and potentially even their lives. Biometric verification by voice is also quite common. If a faithful model of a customer’s voice is created and there is a leak of his or her personal data, the consequences may be disastrous. The risk of such a scenario has already materialised for an application developed by the Australian government [8]. 


It is extremely difficult to predict in what direction the development of artificial intelligence will go with regard to human voice generation applications. It seems necessary to regulate the possibility of using celebrity voice models for commercial purposes and to ensure that humans are not completely replaced by machines in this sphere of activity. Failure to make significant changes in this matter could lead to a further loss of confidence in tools using artificial intelligence. This topic is divisive and has many supporters as well as opponents.  Like any tool, it is neither good nor bad in itself – rather, it all depends on how it is used and on the user’s intentions. We already have tools that can detect whether a given recording has been artificially generated. We should also remember that it takes knowledge, skill, and effort to clone a human voice in a convincing way. Otherwise, the result is clumsy and one can immediately tell that something is not right. This experience is referred to as the uncanny valley. The subtleties, emotions, variations, accents, and imperfections present in the human voice are extremely difficult to reproduce. This gives us hope that machines will not replace human beings completely, and this is only due to our perfect imperfection.

Problems in historical data and coded bias

Prater & Borden


In 2014, Brisha Borden, 18, was charged with theft of property worth eighty dollars after she decided to ride a child’s bicycle that had been left abandoned and unsecured. Borden had committed lesser offences in the past as a juvenile.


A year earlier, forty-one-year-old Vernon Prater was caught stealing tools worth a total of $86.35 from a shop. Prater had already been charged with armed robbery, for which he received a five-year prison sentence, as well as with attempted armed robbery.


In the USA at the time, a risk prediction system was used to assess whether a person would commit further crimes in the future. The system gave a rating from 1 to 10: the higher the value, the higher the predicted risk of committing crimes in the future. Borden – a black teenager – was given a high risk rating of 8; Prater – a white adult male – a low risk rating of 3. Two years later, Brisha Borden had committed no crime, while Vernon Prater was serving an eight-year prison sentence after breaking into a warehouse and stealing electronics worth several thousand dollars. [1]


Hidden data


Automated machine learning and big data systems play a growing role in our daily lives, from algorithms suggesting a series for the user to watch, to ones that decide the instalments of a mortgage. However, the moment an algorithm decides on an issue so important for a human being, dangers begin to emerge. Can we even trust such systems to make important decisions? Computer algorithms give a sense of impartiality and objectivity. But is this really the case?


In a nutshell, machine learning algorithms “learn” to make decisions based on the data provided. Regardless of the method of this learning, be it simple decision trees or more sophisticated artificial neural networks, by design the algorithm should extract patterns hidden in the data. Thus, the algorithm will only be as objective as the learning data is objective. While one might agree that, for example, medical or weather data are objective because the expected results are not the result of human decisions, decisions about, for example, the granting of credit or employment were historically made by people. Naturally, people are not fully objective and are guided by a certain worldview and, unfortunately, also by prejudices. These biases find their way into the data in a more or less direct way.


The issue of preparing data suitable for training machine learning algorithms is a very broad topic. A discussion of possible solutions is a topic for a separate article.

In this case, since we do not want the algorithm to make decisions based on gender, age or skin colour, can we not simply withhold this data? This naive approach, while seemingly logical, has one big flaw: information about these sensitive attributes can be (and probably is) encoded in other, seemingly unrelated features.


Historical data are created by people, and people are unfortunately guided by certain biases. These biases percolate through the data, and even if, when creating a model, one deliberately excludes data on race, age, gender, etc. from the input, this information may still get through indirectly – for example, through postcode information. Bayesian networks can be used, for example, to visualise the interconnections between different features. This tool aims to show where data on which one would not want to base decisions may be hidden. [2]
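To see how a proxy variable can leak sensitive information, consider the minimal sketch below on synthetic data. A hypothetical "postcode" correlates with a protected group, and the historical labels are biased; a model that never sees the group attribute still reproduces the bias through the proxy. All numbers here are made up for illustration.

```python
import random

random.seed(0)

# Synthetic applicants: 'postcode' is a noisy proxy (80% agreement)
# for a protected attribute 'group'; historical approvals are biased.
rows = []
for _ in range(1000):
    group = random.randint(0, 1)
    postcode = group if random.random() < 0.8 else 1 - group
    approved = 1 if random.random() < (0.7 if group == 0 else 0.3) else 0
    rows.append((group, postcode, approved))

# The simplest "model" that never sees 'group': the historical approval
# rate per postcode, used directly as the score for new applicants.
rate = {}
for pc in (0, 1):
    outcomes = [a for g, p, a in rows if p == pc]
    rate[pc] = sum(outcomes) / len(outcomes)

# The historical bias resurfaces through the proxy: postcode 0
# (mostly group 0) scores clearly higher than postcode 1.
gap = rate[0] - rate[1]
```

Dropping the sensitive column was not enough; the bias travelled through the correlated feature, which is exactly why the interdependencies between attributes need to be analysed.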


Judicial risk assessment system in the USA


Reference should again be made to the algorithm used in the US penal system (the COMPAS system). Julia Dressel and Hany Farid [3] investigated how this system works. First, they conducted a survey in which respondents with no background in criminology were given a brief description of the accused person’s crime (including their age and gender, but not their race) and their history of previous prosecutions; the respondents’ task was to predict whether the person would be convicted again within the next two years. The survey showed an accuracy (67%) similar to that of the system used by the US penal system (65.2%). Interestingly, the pattern of false positives, i.e. cases where defendants were incorrectly assigned to the high-risk group, was consistent across both: black people, both in the anonymous survey and according to COMPAS, were more likely to be placed in the higher-risk group than white people. As a reminder – survey respondents had no information about the race of the accused.


Other machine learning methods were then tested, including a logistic regression algorithm with two input features – age and number of previous charges. This algorithm works in such a way that individual measurements from the training dataset are placed on (in this case) a two-dimensional plane, where each axis represents the value of one feature. A straight line is then drawn separating the cases from the two categories. Usually, it is not possible to draw a perfect straight line that separates the two categories without error, so a line is chosen for which the error is minimal. In this way, the plane is divided into two regions – those who were charged again within two years and those who were not (Fig.1).

Fig.1 Mode of operation of the logistic regression algorithm.

This algorithm has an accuracy (66.8%) similar to COMPAS (65.4%). In this case too, a much higher proportion of black people than white people was incorrectly classified as higher risk.
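As a rough sketch of the two-feature model described above, one can fit a logistic regression with plain gradient descent and read off the straight decision boundary. The data below is synthetic and its "true" coefficients are made up; this is not the real COMPAS dataset.

```python
import math
import random

random.seed(1)

# Synthetic stand-in for (age, number of previous charges) records.
data = []
for _ in range(200):
    age = random.uniform(18, 70)
    priors = random.randint(0, 10)
    true_score = -0.08 * age + 0.5 * priors + 1.0   # made-up rule
    label = 1 if 1 / (1 + math.exp(-true_score)) > random.random() else 0
    data.append((age / 70, priors / 10, label))     # features scaled to ~[0, 1]

# Fit logistic regression with plain batch gradient descent.
w1 = w2 = b = 0.0
lr = 1.0
for _ in range(1000):
    g1 = g2 = gb = 0.0
    for x1, x2, y in data:
        p = 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))
        g1 += (p - y) * x1
        g2 += (p - y) * x2
        gb += p - y
    n = len(data)
    w1 -= lr * g1 / n
    w2 -= lr * g2 / n
    b -= lr * gb / n

# The line w1*x1 + w2*x2 + b = 0 is the straight boundary of Fig.1:
# points on its positive side are predicted "charged again".
accuracy = sum(
    ((w1 * x1 + w2 * x2 + b > 0) == (y == 1)) for x1, x2, y in data
) / len(data)
```

Even such a trivially simple model reaches an accuracy in the same range as COMPAS, which is the point Dressel and Farid make.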


As it turns out, information about race can also permeate the arrest rate data [2][3]. In the US, for example, black people are arrested for drug possession four times more often than white people [8][9].


Non-functioning models


Sometimes models just do not work.


In 2012, data from a rating system for New York City teachers from 2007 to 2010 was published. This system gave teachers a rating from 1 to 100, supposedly based on the performance of the teacher’s students. Gary Rubinstein [4] decided to look at the published data. He noted that teachers who had been included in the rating programme for several years had a separate rating for each year. Working from the assumption that a teacher’s rating should not change dramatically from year to year, he decided to see how it changed in reality. Rubinstein plotted the teachers’ ratings, with the first-year rating on the X-axis and the second-year rating for the same class on the Y-axis. Each dot on the graph represents one teacher (Fig.2).

Fig.2 Graph of teacher ratings in two consecutive years. [4]

The logical result would be a near-linear relationship, or some other correlation, since the results of the same class with the same teacher should not change drastically from year to year. Instead, the graph looks more like the output of a random number generator: some classes rated close to 100 had a score close to 0 the following year, and vice versa. A system on which teachers’ salaries are based – or even decisions about dismissal – should not generate such results; this system simply does not work.
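Rubinstein's observation can be quantified with a simple correlation check. In the sketch below, the rating pairs are simulated as independent random numbers, mimicking the scattered look of Fig.2 (the real published data would be used in practice), and the Pearson coefficient confirms there is no year-to-year relationship.

```python
import random

random.seed(2)

# Made-up rating pairs (year 1, year 2), drawn independently to mimic
# the scatter of Fig.2.
pairs = [(random.uniform(0, 100), random.uniform(0, 100))
         for _ in range(500)]

def pearson(pairs):
    """Pearson correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / (vx * vy) ** 0.5

r = pearson(pairs)  # close to 0: no year-to-year relationship
```

A rating system whose consecutive-year scores correlate no better than noise cannot be measuring a stable property of the teacher.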


Face recognition algorithms have a similar problem. Typically, a machine learning algorithm analyses multiple images of faces and multiple images of something else, and detects patterns characteristic of faces that are not present in the other images. The problem starts when someone has a face that deviates from those present in the training dataset. Those creating such an algorithm should try to use as diverse a training dataset as possible. Unfortunately, it turns out that people with darker skin are often under-represented in training datasets, which usually have a skin colour distribution similar to that of the society from which the data are collected. That is, if the training dataset consists of images of US and European citizens, for example, then the percentage of each skin colour in the dataset will be similar to US and European demographics, where light-skinned people predominate (Fig.3).

Fig.3 Left: US census data [6]. Right: percentage of races in publicly available datasets [7].

Researchers at MIT [5] investigated the accuracy of facial recognition algorithms by gender and skin colour. They found that the technologies of the most popular companies, such as Amazon and IBM, failed to recognise women with dark skin (Figure 4). When such technologies are used in products relying on facial recognition, issues of accessibility and security arise. If the accuracy is low even for one specific group, there is a high risk of someone unauthorised gaining access to, for example, a phone. And when facial recognition is used by the police in surveillance cameras, there is a high risk that innocent people will be wrongly identified as wanted persons. Such situations have already occurred many times – all due to a malfunctioning algorithm, which could quite easily be fixed with the right selection of training datasets.

Fig. 4 Accuracy of the investigated face recognition technologies. [5]

Following the publication of the MIT study, most companies have improved the performance of their algorithms so that the disparity in facial recognition is negligible.


Inclusive code


We cannot place complete trust in machine learning algorithms and big data, especially when it comes to deciding human fate.


In order to create a tool that is effective and does not learn human biases, one has to go down to the data level. It is necessary to analyse the interdependencies of attributes that may indicate race, gender or age, and to select only those that are really necessary for the algorithm to work correctly. It is then essential to analyse the algorithm itself and its results to ensure that it is indeed objective.


Machine learning models learn by searching for patterns and reproducing them. When unfiltered historical data is provided, no new, more effective tools are actually created, but the status quo is automated. And when human fate is involved, we as developers cannot afford to repeat old mistakes.



Sight-playing – part 3

In the previous article, we created the harmony of the piece. What we need now is a good melody to match this harmony. Melodies consist of motifs, i.e. small fragments of about 2-5 notes, and their variations (transformations).

We will start by generating the first motif: its rhythm and its sounds. As we did when generating the harmony, we will use N-gram statistics for musical pieces. These statistics will be prepared using the Essen Folksong Collection database. You might as well use any other melody database; this choice will affect the type of melodies that will be generated. For each piece, we must isolate the melody, convert it into a sequence of rhythmic values and a sequence of sounds, and extract the statistics from these sequences. When compiling the sound statistics, it is a good idea to prepare the melodies first by transposing them all to two keys, e.g. C major and c minor. This reduces the number of possible (probable) N-grams twelvefold, and the statistics are therefore better estimated.
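Extracting such N-gram statistics can be sketched as follows. The melodies below are made-up lists of rhythmic values, standing in for the sequences isolated from a real collection.

```python
from collections import Counter

# Made-up melodies, each already reduced to a sequence of rhythmic
# values (in quarter notes), standing in for a real melody database.
melodies = [
    [1, 1, 2, 1, 1, 2],
    [2, 1, 1, 4],
    [1, 1, 1, 1, 2, 2],
]

def ngram_counts(sequences, n):
    """Count every length-n window across all sequences."""
    counts = Counter()
    for seq in sequences:
        for i in range(len(seq) - n + 1):
            counts[tuple(seq[i:i + n])] += 1
    return counts

unigrams = ngram_counts(melodies, 1)
bigrams = ngram_counts(melodies, 2)
```

Dividing each count by the total for its order turns these counts into the probabilities used when drawing rhythmic values and sounds.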

A good motif

We will begin creating the first motif by generating its rhythm. Here, I would like to remind you that we have previously made a certain simplification: each motif and its variations will last exactly one bar. The subsequent steps for generating the rhythm of a motif:

  • we draw the first rhythmic value using unigrams,
  • we draw the next rhythmic value using bigrams and unigrams,
  • we continue to draw consecutive rhythmic values, using N-grams of increasingly higher order (up to 5-grams),
  • we stop when we reach a total rhythmic value equal to the length of one bar; if we have exceeded the length of one bar, we start the whole process from the beginning (generation is fast enough that we can afford this sub-optimal trial-and-error method).
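The steps above can be sketched as follows. For brevity, the sketch draws from unigram statistics only (the probabilities are made up), rather than backing off through bigrams up to 5-grams.

```python
import random

random.seed(3)

# Made-up unigram statistics for rhythmic values (in quarter notes).
UNIGRAMS = {0.5: 0.3, 1: 0.4, 2: 0.2, 4: 0.1}
BAR = 4  # one bar of 4/4

def generate_motif_rhythm():
    values = list(UNIGRAMS)
    weights = [UNIGRAMS[v] for v in values]
    while True:  # trial and error: restart whenever we overshoot the bar
        rhythm, total = [], 0
        while total < BAR:
            v = random.choices(values, weights=weights)[0]
            rhythm.append(v)
            total += v
        if total == BAR:  # exactly one bar: done
            return rhythm

motif_rhythm = generate_motif_rhythm()
```

The restart-on-overshoot loop is exactly the "sub-optimal trial-and-error" described above; each attempt is so cheap that wasted draws do not matter.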

The next step is to generate the sounds of the motif. Another simplification we made earlier is that we generate pieces only in the C major key, so we will use the N-gram statistics created on the basis of pieces transposed to this key, excluding pieces in minor keys. The procedure is similar to that for generating the rhythm:

  • we draw the first sound using unigrams,
  • we draw the next sound using bigrams and unigrams,
  • we continue until we have drawn as many sounds as we previously drew rhythmic values,
  • we check whether the motif matches the harmony; if not, we go back and start again.

If after approximately 100 attempts we have failed to generate a motif matching the harmony, this may mean that, with the preset harmony and the preset motif rhythm, there is a very low probability of drawing sounds that match the harmony. In this case, we go back and generate a new motif rhythm.

Generate until you succeed

When generating both the motif rhythm and its sounds, we use the trial-and-error method. It will also be used in the generation of motif variations described below. Even if this method may seem “stupid”, it is simple and it works. Although such randomly generated motifs very often do not match the harmony, we can afford to make many such mistakes. Even 1000 attempts take very little time to compute on today’s computers, and this is enough to find the right motif.

Variations with repetitions

We have the first motif, and now need the rest of the melody. However, we will not continue to generate new motifs, as the piece would become chaotic. We also cannot keep repeating the same motif, as the piece would become too boring. A reasonable solution would be, in addition to repeating the motif, to create a modification of that motif, ensuring variation, but without making the piece chaotic.

There are many methods to create motif variations. One such method is chromatic transposition. It involves transposing all notes upward or downward by the same interval. This method can lead to a situation where a motif variation has sounds from outside the key of the piece. This, in turn, means that the probability that the variation will match the harmony is very low. Another method is diatonic transposition, whereby all notes are transposed by the same number of scale steps. Unlike the previous method, diatonic variations do not have off-key sounds.

Yet another method is to change a single interval; one of the motif intervals is changed, while all other intervals remain unchanged. That way, only one part of the motif (the beginning or the end) is transposed (via chromatic or diatonic transposition). Further methods are to convert two notes with the same rhythmic value to one or to convert one note to two notes with the same rhythmic value. For the first method, if the motif has two notes with the same rhythmic value, its rhythm can be changed by combining these two notes. For the second method, a note is selected at random and converted to two “shorter” notes.
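Two of the transposition methods above can be sketched as follows, representing a motif either as MIDI pitches (for chromatic transposition) or as C-major scale-degree indices (for diatonic transposition). The helper names and the example motif are our own.

```python
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale

def chromatic(motif_midi, semitones):
    """Chromatic transposition: shift every note by the same interval.
    May leave the key, so the result can contain off-key sounds."""
    return [p + semitones for p in motif_midi]

def diatonic(motif_degrees, steps):
    """Diatonic transposition: shift every note by the same number of
    scale steps; degrees stay inside the key, so no off-key sounds."""
    return [d + steps for d in motif_degrees]

def degrees_to_midi(degrees, base_octave=5):
    """Map C-major scale-degree indices (0 = C) to MIDI pitches."""
    return [12 * (base_octave + d // 7) + C_MAJOR[d % 7] for d in degrees]

motif = [0, 2, 4, 2]            # C E G E as scale degrees
variation = diatonic(motif, 1)  # D F A F: one scale step up
```

Note how the diatonic version, by working in scale degrees rather than semitones, guarantees the variation stays in the key of the piece, which is exactly why it matches the harmony far more often.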

Each of the described methods for creating variations makes it possible to generate different motifs.

Etc., etc.

There are many more methods for generating motif variations; it is possible to come up with many others. The only restriction is that the generated variation should not differ too much from the original motif; otherwise, it would constitute a new motif rather than a variation. The border where a variation ends and another motif begins is conventional in nature, and everyone “feels” and defines it a little differently.

Is that all?

That would be all when it comes to piece generation. Let us summarise the steps that we have taken:

  1. Generating piece harmony:
    • generating harmonic rhythm,
    • generating chord progression.
  2. Generating melody:
    • generating motif rhythm,
    • generating motif sounds,
    • creating motif variations,
    • creating motifs and variations “until it’s done”, that is, until they match the generated harmony

All that is left is to make sure that the generated pieces are of the given difficulty, i.e. matching the skills of the performer.

Controlling the difficulty

One of our assumptions was the ability to control the piece difficulty. This can be achieved via two approaches:

  1. generating pieces “one after another” and checking their difficulty levels (using the methods described earlier), thereby preparing a large database of pieces from which random pieces of the given difficulty will then be selected,
  2. controlling the parameters for creating the harmonies, motifs and variations in such a way that they generate musical elements of the given difficulty with increased frequency

Both methods are not mutually exclusive and thus can be successfully used together. First, a number of pieces (e.g. 1000) should be generated randomly, and then parameters should be controlled to generate further pieces (but only those which are missing). With respect to parameter control, it is worth noting that the probability of motif repetition can be changed. For pieces with low difficulty, the assigned probability will be higher (repetitions are easier to play). On the other hand, difficult pieces will be assigned lower probability and rarer harmonies (which will also force rarer motifs and variations).

Sight-playing – part 2

In the first part of the article, we learned about many musical and technical concepts. Now it is time to use them to build an automatic composer. Before doing so, however, we must make certain assumptions (or rather simplifications):

  • the pieces will consist of 8 bars in periodic structure (antecedent 4 bars, consequent 4 bars)
  • the metre will be 4/4 (i.e. 4 quarter notes to each bar, accent on the first and third measures of the bar)
  • the length of each motif is 1 bar (although this requirement appears restrictive, many popular pieces are built precisely from motifs that last 1 bar).
  • only the C major key will be used (if necessary, we can always transpose the piece to any key after it is generated)
  • we will limit ourselves to the roughly 25 most common varieties of harmonic degrees (there are 7 degrees, but some of them have several versions, with additional sounds which change the chord colour).

What is needed to create a musical piece?

In order to automatically create a simple musical piece, we need to:

  • generate the harmony of a piece – chords and their rhythm
  • create motifs – their sounds (pitches) and rhythm
  • create variations of these motifs – as above
  • combine the motifs and variations into a melody, matching them with the harmony

Having mastered the basics, we can move on to the first part of automatic composing – generating a harmony. Let’s start by creating a rhythm of the harmony.

Slow rhythm

Although one might be tempted to create a statistical model of the harmonic rhythm, unfortunately (at least at the time of writing this article) there is no available database which would make this possible. Given the above, we must handle this differently: let’s come up with such a model ourselves. For this purpose, let’s choose a few “sensible” harmonic rhythms and give them some “sensible” probabilities.

Rhythm (quarter notes)    Probability
[6, 2]                    0.1
[2, 1, 1]                 0.02
[2, 6]                    0.1
[3, 1]                    0.02
[7, 1]                    0.02
[1, 1, 1, 1]              0.02

Table 1. Harmonic rhythms, with values expressed in quarter notes – [6, 2] denotes a rhythm with two chords, the first lasting 6 quarter notes and the second 2 quarter notes.

The rhythms in the table are presented in terms of chord duration, and the duration is shown in the number of quarter notes. Some rhythms last two bars (e.g. [8], [6, 2]), and others one bar ([4], [1, 1, 2] etc.).

Generating the rhythm of the harmony proceeds as follows: we draw new rhythms until we have filled as many bars as we need (8 in our case). Certain complications may arise from the fact that the rhythms have different lengths. For example, to complete the generation we may need a final rhythm lasting 4 quarter notes, but draw one lasting 8 quarter notes. In this case, to avoid unnecessary problems, we can force a draw from the subset of 4-quarter-note rhythms.
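This drawing procedure can be sketched as follows, using the rhythms and probabilities from Table 1 (`random.choices` renormalises the weights automatically, so they need not sum to one).

```python
import random

random.seed(4)

# Hand-made harmonic rhythm model from Table 1 (quarter-note durations).
RHYTHMS = {(6, 2): 0.1, (2, 6): 0.1, (7, 1): 0.02,
           (2, 1, 1): 0.02, (3, 1): 0.02, (1, 1, 1, 1): 0.02}
QUARTERS_NEEDED = 8 * 4  # 8 bars of 4/4

def draw_rhythm(max_quarters):
    # Force the draw from the subset of rhythms that still fit.
    fitting = [r for r in RHYTHMS if sum(r) <= max_quarters]
    return random.choices(fitting, weights=[RHYTHMS[r] for r in fitting])[0]

progression, left = [], QUARTERS_NEEDED
while left > 0:
    r = draw_rhythm(left)
    progression.append(r)
    left -= sum(r)
```

Since every rhythm in this model lasts either 8 or 4 quarter notes, the remaining space is always a multiple of 4 and the loop always finishes on an exact bar boundary.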

Then, in line with the above findings, let’s suppose that we drew the following rhythms:

  • antecedent: [4, 4], [2, 2], [3, 1], 
  • consequent: [3, 1], [8], [2, 2]


In the next step, we will be using the concept of likelihood. It is a probability not normalised to one (a so-called pseudo-probability), which helps to assess the relative probability of different events. For example, if the likelihoods of events A and B are 10 and 20 respectively, this means that event B is twice as likely as event A. These likelihoods might as well be 1 and 2, or 0.005 and 0.01. Probabilities can be calculated from likelihoods: if we assume that only events A and B can occur, then their probabilities are, respectively, p(A) = 10 / (10 + 20) = 1/3 and p(B) = 20 / (10 + 20) = 2/3.

Chord progressions

In order to generate probable harmonic flows, we will first prepare N-gram models of harmonic degrees. To this end, we will use the N-gram models available on GitHub.

In our example, we will use 1-, 2-, 3-, 4- and 5-grams.

In the rhythm of the antecedent's harmony, there are 6 rhythmic values, so we need to prepare a flow of 6 harmonic degrees. We generate the first chord using unigrams (1-grams): we first prepare the likelihoods for each possible degree and then draw with these likelihoods taken into account. The formula for the likelihood is quite simple in this case:

likelihood(X) = p(X)

  • X means any harmonic degree
  • p(X) is the probability of the 1-gram of X

In this case, we drew the IV degree (in the adopted key of C major, this is the F major chord).

We generate the second chord using bigrams and unigrams, with a greater weight for bigrams.

likelihood(X) = weight_2gram * p(X | IV) + weight_1gram * p(X)


  • p(X | IV) is the probability of the flow (IV, X), i.e. of degree X occurring right after IV
  • weight_Ngram is the adopted weight of the N-gram model (the greater the weight, the greater the impact of this N-gram model, and the smaller the impact of the other models)

We can adopt N-gram weights as we wish. For this example, we chose the following:


The next chord we drew was: vi degree (a minor).

The generation of the third chord is similar, except that we can now use 3-grams:

likelihood(X) = weight_3gram * p(X | IV, vi) + weight_2gram * p(X | vi) + weight_1gram * p(X)

And so we continue until we have generated all the necessary chords. In our case, we drew:

IV, vi, I, iii, IV, vi (in the adopted key of C major these are, respectively, F major, a minor, C major, e minor, F major and a minor chords).
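The whole drawing procedure can be sketched as follows. The N-gram probabilities and weights below are made up for illustration (the article does not list its actual values); only the weighted-likelihood scheme follows the formulas above:

```python
import random

# Illustrative (made-up) N-gram probabilities of harmonic degrees;
# real models are estimated from a corpus of chord progressions.
MODELS = {
    1: {("I",): 0.25, ("IV",): 0.20, ("vi",): 0.15},
    2: {("IV", "vi"): 0.30, ("IV", "I"): 0.25, ("vi", "I"): 0.30},
    3: {("IV", "vi", "I"): 0.40, ("IV", "vi", "iii"): 0.15},
}
WEIGHTS = {1: 1, 2: 10, 3: 100}  # assumed weights: longer context matters more
DEGREES = ["I", "ii", "iii", "IV", "V", "vi", "vii"]

def likelihood(x, history):
    """likelihood(X) = sum over n of weight_ngram * p(X | last n-1 degrees)."""
    total = 0.0
    for n, model in MODELS.items():
        context = tuple(history[-(n - 1):]) if n > 1 else ()
        total += WEIGHTS[n] * model.get(context + (x,), 0.0)
    return total

def draw_next(history):
    """Draw the next harmonic degree, using the likelihoods as draw weights."""
    liks = [likelihood(d, history) for d in DEGREES]
    return random.choices(DEGREES, weights=liks)[0]
```

With history [IV, vi], for instance, likelihood(I) combines all three models: 100 * 0.40 + 10 * 0.30 + 1 * 0.25 = 43.25.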

This is not a very common chord progression but, as it turns out, it occurs in 5 popular songs.


We were able to generate the rhythms and chords which are the components of the harmony of a piece. However, it should still be noted here that, for the sake of simplicity, we didn’t take into account two important factors:

  • The harmonic flows of the antecedent and consequent are very often linked in some way. The harmony of the consequent may be identical with that of the antecedent or perhaps slightly altered to create the impression that these two sentences are somehow linked.
  • The antecedent and consequent almost always end on specific harmonic degrees. This is not a strict rule, but some harmonic degrees are far more likely than others at the end of musical sentences.

For the purposes of the example, however, the task can be deemed completed. The harmony of the piece is ready, now we only need to create a melody to this harmony. In the next part of our article, you will find out how to compose such a melody.

Sight-playing — part 1

During their education, musicians need to acquire the ability to play a vista (at sight), that is, to play an unfamiliar piece of music without having had a chance to get familiar with it beforehand. Thanks to this skill, virtuosos can not only play most pieces without preparation but also need much less time to learn the more demanding ones. Learning to play a vista, however, requires practising on a great number of musical pieces. The pieces used for such practice should be little-known and matched to the skill level of the musician concerned. Future virtuosos must therefore devote a lot of their time (and that of their teachers) to preparing such a playlist, which further discourages learning. Worse still, once used, a playlist is no longer useful for anything.

The transistor composer

But what if we had something that could prepare such musical pieces on its own, in a fully automated way? Something that could not only create the playlist but also match the difficulty of the musical pieces to the musician's skill level. This idea paved the way for the creation of an automatic composer: a computer programme that composes musical pieces using artificial intelligence, an approach that has been gaining popularity in recent times.

Admittedly, the word "composing" is perhaps somewhat of an exaggeration, and the term "generating" would be more appropriate. Then again, human composers also create musical pieces based on algorithms of their own. Semantics aside, what matters here is that such a (for the time being, simple) programme has been successfully created and budding musicians can benefit from it.

However, before we discuss how to generate musical pieces, let us first learn the basics of how musical pieces are structured and what determines their difficulty.

Fundamentals of music

The basic concepts in music include the interval, semitone, chord, bar, metre, musical scale and key of a musical piece. An interval is a quantity that describes the distance between two consecutive notes of a melody. Its unit is the semitone, although in practice the names of specific intervals are commonly used. A semitone, in turn, is the smallest accepted difference between pitches (approximately 6% in frequency). Pitch differences can be arbitrarily small; this division into semitones has simply become the accepted standard. A chord is three or more notes played simultaneously. The next concept is the bar: the space between the vertical lines (bar lines) on the stave. Sometimes a musical piece may begin with an incomplete bar (anacrusis).

Figure 1. Visualisation of an anacrusis

Metre refers to how many rhythmic values there are in one bar. In 4/4 metre, there should be four quarter notes to each bar; in 3/4 metre, three quarter notes; and in 6/8 metre, six eighth notes. Although 3/4 and 6/8 denote the same total of rhythmic values, these metres are different, as their accents fall on different places in the bar. In 3/4 metre, the accent falls on the first quarter note (correctly put, "on the downbeat"). By comparison, in 6/8 metre, the accents fall on the first and fourth eighth notes of the bar.

A musical scale is a set of sounds that defines the sound material a musical work uses. Scales are ordered, usually by increasing pitch. The most popular scales are major and minor; while many more scales exist, these two predominate in the Western cultural circle and were used in most older as well as currently popular pieces. Another concept is the key, which identifies the exact tones a musical piece uses. In terms of scale vs. key, scale is the broader term: there are many keys of a given scale, but each key has its own scale. The key determines the note on which the scale starts.

Structure of a musical piece

In classical music, the most popular principle for shaping a piece of music is periodic structure. The compositions are built using certain elements, i.e. periods, which form a separate whole. However, several other concepts must be introduced to explain them.

A motif is a sequence of several notes, repeated in the same or slightly altered form (a variation) elsewhere in the piece. Typically, the duration of a motif equals the length of one bar.

A variation of a motif is a form of the motif that has been altered in some way but retains most of its characteristics, such as its rhythm or a characteristic interval. Musical pieces do not contain numerous motifs at once; a single piece is mostly composed of variations of a single motif. Thanks to this, each musical piece has a character of its own and does not surprise the listener with new musical material every now and then.

A musical theme is usually a sequence of 2-3 motifs that are repeated (possibly in slightly altered versions) throughout the piece. Not every piece of music needs to have a theme.

A sentence is a combination of two or more phrases (a phrase, in turn, being a sequence of a few motifs).

A period is defined by the combination of two musical sentences. Below is a simple small period with its basic elements highlighted.

Figure 2. Periodic structure diagram of a musical piece

This is roughly what the periodic structure looks like. Several notes form a motif, a few motifs create a phrase, a few phrases comprise a sentence, a few sentences make up a period, and finally, one or more periods form a whole musical piece. There are also alternative methods of creating musical pieces. However, the periodic structure is the most common, and importantly in this case, easier to program.

Composing in harmony

Compositions are typically based on harmonic flows: chords that have their own "melody" and rhythm. The successive chords in a harmonic flow are not completely random. For example, the F major and G major chords are very likely to be followed by C major; they are less likely to be followed by E minor and extremely unlikely to be followed by D♯ major. There are certain rules governing these chord relationships, but we do not need to delve into them further, since we will be using statistical models to generate song harmonies.

Instead, we need to understand what harmonic degrees are. Every key has several important chords called triads. Their roots are the successive notes of the key, and their remaining notes also belong to the key; e.g. the first degree of the C major key is the C major chord, the second degree the D minor chord, the third degree the E minor chord, and so on. Harmonic degrees are denoted by Roman numerals; major chords are usually denoted by uppercase numerals and minor chords by lowercase ones (the basic degrees of the major scale: I, ii, iii, IV, V, vi, vii°).

Harmonic degrees are, in a sense, "universal" chords: no matter which note a key starts on, the probabilities of successive harmonic degrees are the same. In the key of C major, the chord sequence C – F – G – C is just as likely as the sequence G – C – D – G in the key of G major. This example shows one of the most common harmonic flows used in music, expressed in degrees: I – IV – V – I.
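This universality is easy to express in code: a harmonic degree is just an offset within the key's scale. A minimal sketch (note spelling simplified to sharps; the function name is ours):

```python
# Chromatic scale and the semitone offsets of the major-scale degrees.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]
DEGREE_INDEX = {"I": 0, "ii": 1, "iii": 2, "IV": 3, "V": 4, "vi": 5, "vii": 6}

def chord_root(key, degree):
    """Root note of a harmonic degree in a given major key."""
    tonic = NOTES.index(key)
    return NOTES[(tonic + MAJOR_STEPS[DEGREE_INDEX[degree]]) % 12]

# The same degree sequence I - IV - V - I yields C - F - G - C in C major
# and G - C - D - G in G major.
```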

Melody sounds are not completely arbitrary; they are governed by many rules and exceptions. Below is an example of a rule and an exception in creating harmony:

  • Rule: on every beat of the bar, there should be a note belonging to the current chord,
  • Exception: sometimes a note that does not belong to the chord falls on a beat; however, it is then followed relatively quickly by a note of that chord.

These rules and exceptions in harmony do not have to be strictly adhered to. However, if one does comply with them, there is a much better chance that one’s music will sound good and natural.

Factors determining the difficulty of a musical piece

Several factors influence the difficulty of a piece of music:

  • tempo — in general, the faster a musical piece is, the more difficult it gets, irrespective of the instrument (especially when playing a vista),
  • melody dynamics — a melody consisting of two sounds will be easier to play than one that uses many different sounds,
  • rhythmic difficulty — the more complex the rhythm, the more difficult the musical piece; difficulty increases as the number of syncopations, triplets, pedal notes and similar rhythmic "varieties" grows,
  • repetition — no matter how difficult a melody is, it is much easier to play if parts of it are repeated than if it changes all the time; worse still are cases where the melody is repeated in a slightly altered, "tricky" way (when the change is easy to overlook),
  • difficulties related to musical notation — the more extra accidentals (flats, sharps, naturals), the more difficult a musical piece is,
  • instrument-specific difficulties — some melodic flows can have radically different levels of difficulty on different instruments, e.g. two-note tunes on the piano or guitar are much easier to play than two-note tunes on the violin.

Some keys are also more difficult than others because their key signatures contain more accidentals to remember.

Technical aspects of the issue

Since we have outlined the musical side in the previous paragraphs, we will now focus on the technical side. To get into it properly, it is necessary to delve into the issue of “conditional probability”. Let us start with an example.

Suppose we do not know where we are, nor do we know today's date. What is the probability of it snowing tomorrow? Probably quite small (in most places on Earth, it never or hardly ever snows), so we will estimate this probability at about 2%. However, we have just found out that we are in Lapland, a land located just beyond the northern Arctic Circle. Bearing this in mind, what would the probability of it snowing tomorrow be now? Much higher than a moment ago. Unfortunately, this information alone does not settle the matter, since we do not know the current season; we will therefore set our probability at 10%. The last piece of information we receive is that it is the middle of July and summer is in full swing. With that, we can put the probability of it snowing tomorrow at 0.1%.

Conditional probability

The above story allows us to draw a simple conclusion: the probability depended on the state of our knowledge and could change in either direction as that knowledge changed. This is how conditional probabilities, denoted as follows, work in practice:

P(A | B)
They inform us of how probable it is for an event to occur (in this case, A) if some other events have occurred (in this case, B). An “event” does not necessarily mean an occurrence or incident — it can be, as in our example, any condition or information.

To calculate conditional probabilities, we must know how often event B occurs and how often events A and B occur at the same time. It will be easier to explain by returning to our example. Assuming that A means snow falling and B means being in Lapland, the probability of snow falling when we are in Lapland is equal to:

P(snow falling | being in Lapland) = P(snow falling and being in Lapland) / P(being in Lapland)

The same equation, expressed more formally and using the accepted symbols A and B, would be as follows:

P(A | B) = P(A ∩ B) / P(B)

Note that this is not the same as the probability of it snowing in Lapland in general. Perhaps we visit Lapland mostly in winter, when it is very likely to snow while we are there?

Now, to calculate this probability exactly, we need two statistics:

  • N(A∩B) — how many times it snowed when we were in Lapland,
  • N(B) — how many times we have been to Lapland,

and N — how many days we have lived so far (or how many days have passed since we started keeping the above statistics).
We will use this data to calculate P(A∩B) and P(B) respectively:

P(A ∩ B) = N(A∩B) / N
P(B) = N(B) / N

At last, we have what we expected:

P(A | B) = P(A ∩ B) / P(B) = (N(A∩B) / N) / (N(B) / N) = N(A∩B) / N(B)

The probability of it snowing if we are in Lapland is equal to the ratio of how many times it snowed when we were in Lapland to how many times we were in Lapland. It is also worth adding that the more often we have been to Lapland, the more accurate this probability will be (if we have spent 1,000 days in Lapland, we will have a much better idea about it than if we have been there 3 times).
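The resulting estimate is a one-liner in code; the counts below are hypothetical:

```python
def conditional_probability(n_a_and_b, n_b):
    """Estimate P(A|B) = N(A and B) / N(B) from counts."""
    if n_b == 0:
        raise ValueError("B never occurred; P(A|B) is undefined")
    return n_a_and_b / n_b

# Hypothetical statistics: it snowed on 60 of the 100 days we spent in Lapland.
p_snow_given_lapland = conditional_probability(60, 100)  # 0.6
```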


The next thing we need to know before taking up algorithmic music composition is N-grams: how to create them and how to use them to generate probable sequences of data. N-grams are statistical models; one N-gram is a sequence of elements of length N. There are 1-grams, 2-grams, 3-grams, and so on. Such models are often used in language modelling, where they make it possible to determine how probable a given sequence of words is.

To do that, you take a language corpus (lots of books, newspapers, websites, forum posts, etc.) and count how many times a particular sequence of words occurs in it. For example, if the sequence "zamek królewski" [English: royal castle] occurs 1,000 times in the corpus and the sequence "zamek błyskawiczny" [English: zip fastener] occurs 10 times, the first sequence is 100 times more likely than the second. Such information can prove useful: it allows us to determine how probable every sentence is.
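Counting such occurrences is straightforward; the tiny corpus below is, of course, only illustrative:

```python
from collections import Counter

def bigram_counts(corpus):
    """Count adjacent word pairs (2-grams) in a list of space-tokenised sentences."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        counts.update(zip(words, words[1:]))
    return counts

# Toy corpus; real models are built from millions of sentences.
corpus = ["zamek królewski w Warszawie",
          "zamek królewski na Wawelu",
          "zamek błyskawiczny"]
counts = bigram_counts(corpus)
```

Here the pair ("zamek", "królewski") is counted twice and ("zamek", "błyskawiczny") once, so the first continuation is estimated as twice as likely.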