Sight-playing – part 3

We already created the harmony of the piece in the previous article. What we need now is a good melody to match this harmony. Melodies consist of motifs – small fragments of about 2-5 notes – and their variations (transformations).

We will start by generating the first motif – its rhythm and sounds. As we did when generating the harmony, we will use N-gram statistics for musical pieces. Such statistics will be prepared using the Essen Folksong Collection database. You might as well use any other melody database; this choice will affect the type of melodies that are generated. For each piece, we must isolate the melody, convert it into a sequence of rhythmic values and a sequence of sounds, and extract the statistics from these sequences. When compiling sound statistics, it is a good idea to prepare the melodies first – transpose them all to two keys, e.g. C major and c minor. This will reduce the number of possible (probable) N-grams by a factor of 12, and the statistics will therefore be estimated more reliably.
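
For illustration, counting such N-grams might look as follows in Python (a sketch; the toy corpus stands in for the Essen collection, with melodies already reduced to MIDI pitches and transposed to C major):

    from collections import Counter

    def count_ngrams(sequences, max_n=5):
        """Count N-grams (n = 1..max_n) across all sequences."""
        counts = {n: Counter() for n in range(1, max_n + 1)}
        for seq in sequences:
            for n in range(1, max_n + 1):
                for i in range(len(seq) - n + 1):
                    counts[n][tuple(seq[i:i + n])] += 1
        return counts

    # Toy corpus: two melodies already transposed to C major.
    corpus = [[60, 62, 64, 65, 67], [67, 65, 64, 62, 60]]
    ngrams = count_ngrams(corpus)
    print(ngrams[2].most_common(3))   # the three most frequent pitch bigrams

The same routine works for sequences of rhythmic values; only the input changes.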

A good motif

We will begin creating the first motif by generating its rhythm. Here, I would like to remind you that we previously made a certain simplification – each motif and its variations will last exactly one bar. The subsequent steps for generating the rhythm of a motif are as follows (a sketch in code follows the list):

  • we draw the first rhythmic value using unigrams,
  • we draw the next rhythmic value using bigrams and unigrams,
  • we continue to draw consecutive rhythmic values, using N-grams of increasingly higher order (up to 5-grams),
  • we stop once we reach a total rhythmic value equal to the length of one bar – if we have exceeded the length of one bar, we start the whole process from the beginning (generation is fast enough that we can afford this sub-optimal trial-and-error method).
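
A minimal Python sketch of this procedure, assuming the N-gram tables have already been extracted from the melody database (the tables below are illustrative, and the mixture of models is simplified here to a longest-context back-off):

    import random

    # Illustrative N-gram tables: context tuple -> {next rhythmic value: probability}.
    # Rhythmic values are in quarter notes; real tables would come from the corpus.
    RHYTHM_NGRAMS = {
        (): {1.0: 0.5, 0.5: 0.3, 2.0: 0.2},        # unigrams
        (1.0,): {1.0: 0.6, 0.5: 0.2, 2.0: 0.2},    # bigrams
        (0.5,): {0.5: 0.7, 1.0: 0.3},
        (2.0,): {1.0: 0.5, 2.0: 0.5},
    }

    def draw_next(history, max_n=5):
        """Draw one rhythmic value, backing off from the longest known context."""
        for n in range(min(max_n - 1, len(history)), -1, -1):
            context = tuple(history[len(history) - n:])
            if context in RHYTHM_NGRAMS:
                dist = RHYTHM_NGRAMS[context]
                return random.choices(list(dist), weights=dist.values())[0]

    def generate_motif_rhythm(bar_length=4.0):
        """Trial and error: redraw the whole motif until it fills the bar exactly."""
        while True:
            rhythm = []
            while sum(rhythm) < bar_length:
                rhythm.append(draw_next(rhythm))
            if sum(rhythm) == bar_length:          # overshot the bar -> start over
                return rhythm

    print(generate_motif_rhythm())                 # e.g. [1.0, 0.5, 0.5, 2.0]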

The next step is to generate the sounds of the motif. Another simplification we made earlier is that we generate pieces only in the C major key, so we will make use of the N-gram statistics created on the basis of pieces transposed to this key, excluding pieces in minor keys. The procedure is similar to that for generating the rhythm:

  • we draw the first sound using unigrams,
  • we draw the next sound using bigrams and unigrams,
  • we continue until we have drawn as many sounds as we previously drew rhythmic values,
  • we check whether the motif matches the harmony; if not, we go back and start again.

If after approx. 100 attempts we have failed to generate a motif matching the harmony, this may mean that with the preset harmony and the preset motif rhythm there is a very low probability of drawing sounds that will match the harmony. In this case, we go back and generate a new motif rhythm.
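
Continuing the sketch, the pitch-drawing loop with the harmony check and the 100-attempt limit might look as follows (simplified to unigram draws; the pitch table and the matches_harmony rule are illustrative stand-ins for the real statistics and rules):

    import random

    # Illustrative pitch unigrams for C major (MIDI numbers, C4 = 60).
    PITCH_UNIGRAMS = {60: 0.2, 62: 0.15, 64: 0.2, 65: 0.1, 67: 0.2, 69: 0.1, 71: 0.05}

    def matches_harmony(pitches, chord_pitch_classes):
        """Toy rule: every note of the motif must belong to the current chord."""
        return all(p % 12 in chord_pitch_classes for p in pitches)

    def generate_motif_pitches(n_notes, chord_pitch_classes, max_attempts=100):
        """Draw pitch sequences until one fits the harmony, or give up."""
        for _ in range(max_attempts):
            pitches = random.choices(
                list(PITCH_UNIGRAMS), weights=PITCH_UNIGRAMS.values(), k=n_notes
            )
            if matches_harmony(pitches, chord_pitch_classes):
                return pitches
        return None   # caller should generate a new motif rhythm and retry

    print(generate_motif_pitches(4, {0, 4, 7}))    # C major triad: C, E, G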

Generate until you succeed

When generating both the motif rhythm and its sounds, we use the trial-and-error method. It will also be used in the generation of motif variations described below. Even though this method may seem “stupid”, it is simple and it works. Although such randomly generated motifs very often fail to match the harmony, we can afford many such failed attempts. Even 1,000 attempts take very little time to compute on today’s computers, and that is enough to find the right motif.

Variations with repetitions

We have the first motif, and now need the rest of the melody. However, we will not continue to generate new motifs, as the piece would become chaotic. We also cannot keep repeating the same motif, as the piece would become too boring. A reasonable solution would be, in addition to repeating the motif, to create a modification of that motif, ensuring variation, but without making the piece chaotic.

There are many methods to create motif variations. One such method is chromatic transposition. It involves transposing all notes upward or downward by the same interval. This method can lead to a situation where a motif variation has sounds from outside the key of the piece. This, in turn, means that the probability that the variation will match the harmony is very low. Another method is diatonic transposition, whereby all notes are transposed by the same number of scale steps. Unlike the previous method, diatonic variations do not have off-key sounds.
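
Both kinds of transposition are easy to express in code. A sketch, assuming notes are MIDI pitch numbers and the key is C major:

    C_MAJOR = [0, 2, 4, 5, 7, 9, 11]   # pitch classes of the C major scale

    def chromatic_transpose(pitches, semitones):
        """Shift every note by the same number of semitones (may leave the key)."""
        return [p + semitones for p in pitches]

    def diatonic_transpose(pitches, steps):
        """Shift every note by the same number of scale degrees (stays in the key)."""
        result = []
        for p in pitches:
            octave, pc = divmod(p, 12)
            degree = C_MAJOR.index(pc)                  # assumes the note is in C major
            octave_shift, new_degree = divmod(degree + steps, len(C_MAJOR))
            result.append((octave + octave_shift) * 12 + C_MAJOR[new_degree])
        return result

    motif = [60, 64, 62, 67]                            # C E D G
    print(chromatic_transpose(motif, 2))                # [62, 66, 64, 69] - F# is off-key
    print(diatonic_transpose(motif, 1))                 # [62, 65, 64, 69] - D F E A, in key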

Yet another method is to change a single interval: one of the motif’s intervals is changed, while all the other intervals remain unchanged. That way, only one part of the motif (the beginning or the end) is transposed (chromatically or diatonically). Further methods are to convert two notes with the same rhythmic value into one, or to convert one note into two notes with the same rhythmic value. For the first method, if the motif has two notes with the same rhythmic value, its rhythm can be changed by merging these two notes. For the second method, a note is selected at random and converted into two “shorter” notes.
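
The note-merging and note-splitting variations can be sketched in the same representation, with a motif as a list of (pitch, duration) pairs and durations in quarter notes; repeating the pitch in the split is an arbitrary choice here (one could also draw a new one):

    import random

    def merge_two_notes(motif):
        """Combine two adjacent notes of equal rhythmic value into one note."""
        candidates = [i for i in range(len(motif) - 1)
                      if motif[i][1] == motif[i + 1][1]]
        if not candidates:
            return motif                # no pair to merge - leave the motif unchanged
        i = random.choice(candidates)
        pitch, dur = motif[i]
        return motif[:i] + [(pitch, dur * 2)] + motif[i + 2:]

    def split_one_note(motif):
        """Replace a random note with two notes of half its rhythmic value."""
        i = random.randrange(len(motif))
        pitch, dur = motif[i]
        return motif[:i] + [(pitch, dur / 2), (pitch, dur / 2)] + motif[i + 1:]

    motif = [(60, 1.0), (64, 1.0), (67, 2.0)]
    print(merge_two_notes(motif))       # e.g. [(60, 2.0), (67, 2.0)]
    print(split_one_note(motif))        # e.g. [(60, 1.0), (64, 0.5), (64, 0.5), (67, 2.0)]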

Each of the described methods for creating variations makes it possible to generate a different variant of the original motif.

Etc., etc.

There are many more methods for generating motif variations; it is possible to come up with a lot of them. The only restriction is that the generated variation should not differ too much from the original motif. Otherwise, it would constitute a new motif rather than a variation. The border where the variation ends and another motif begins is rather conventional in nature, and everyone “feels” and defines it a little differently.

Is that all?

That would be all when it comes to piece generation. Let us summarise the steps that we have taken:

  1. Generating piece harmony:
    • generating harmonic rhythm,
    • generating chord progression.
  2. Generating melody:
    • generating motif rhythm,
    • generating motif sounds,
    • creating motif variations,
    • creating motifs and variations “until it’s done”, that is, until they match the generated harmony.

All that is left is to make sure that the generated pieces are of the given difficulty, i.e. matching the skills of the performer.

Controlling the difficulty

One of our assumptions was the ability to control the piece difficulty. This can be achieved via two approaches:

  1. generating pieces “one after another” and checking their difficulty levels (using the methods described earlier), thereby preparing a large database of pieces from which random pieces of the given difficulty will then be selected,
  2. controlling the parameters for creating the harmonies, motifs and variations in such a way that they generate musical elements of the given difficulty with increased frequency.

The two methods are not mutually exclusive and can thus be successfully used together. First, a number of pieces (e.g. 1,000) should be generated randomly, and then the parameters should be adjusted to generate the pieces that are still missing. With respect to parameter control, it is worth noting that the probability of motif repetition can be changed. For pieces with low difficulty, the assigned probability will be higher (repetitions are easier to play). Difficult pieces, on the other hand, will be assigned a lower repetition probability and rarer harmonies (which will in turn force rarer motifs and variations).
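
As an illustration of the second approach, the repetition probability could be a simple function of the requested difficulty (a sketch; the endpoint values 0.8 and 0.2 are made-up tuning constants, to be calibrated against the difficulty-assessment methods mentioned earlier):

    def repetition_probability(difficulty):
        """Map difficulty in [0, 1] to the chance of repeating the previous motif.

        Easy pieces repeat motifs often; hard pieces favour fresh variations.
        """
        return 0.8 - 0.6 * difficulty

    for d in (0.0, 0.5, 1.0):
        print(d, repetition_probability(d))   # 0.8, 0.5, 0.2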

Sight-playing – part 2

In the first part of the article, we learned about many musical and technical concepts. Now it is time to use them to build an automatic composer. Before doing so, however, we must make certain assumptions (or rather simplifications):

  • the pieces will consist of 8 bars in periodic structure (a 4-bar antecedent plus a 4-bar consequent),
  • the metre will be 4/4 (i.e. 4 quarter notes per bar, with accents on the first and third beats of the bar),
  • the length of each motif will be 1 bar (although this requirement appears restrictive, many popular pieces are built precisely from motifs that last 1 bar),
  • only the C major key will be used (if necessary, we can always transpose the piece to any key after it is generated),
  • we will limit ourselves to the roughly 25 most common varieties of harmonic degrees (there are 7 degrees, but some of them have several versions, with additional sounds which change the chord colour).

What is needed to create a musical piece?

In order to automatically create a simple musical piece, we need to:

  • generate the harmony of a piece – chords and their rhythm
  • create motifs – their sounds (pitches) and rhythm
  • create variations of these motifs – as above
  • combine the motifs and variations into a melody, matching them with the harmony

Having mastered the basics, we can move on to the first part of automatic composing – generating a harmony. Let’s start by creating a rhythm of the harmony.

Slow rhythm

Although one might be tempted to create a statistical model of the harmonic rhythm, unfortunately (at least at the time of writing this article) there is no available database which would make this possible. Given the above, we must handle this in a different way – let’s come up with such a model ourselves. For this purpose, let’s choose a few “sensible” harmonic rhythms and assign them some “sensible” probabilities.

rhythm      probability      rhythm            probability
[8]         0.2              [2, 2]            0.1
[6, 2]      0.1              [2, 1, 1]         0.02
[2, 6]      0.1              [3, 1]            0.02
[7, 1]      0.02             [1, 1, 1, 1]      0.02
[4]         0.4              [1, 1, 2]         0.02
Table 1. Harmonic rhythms, values expressed in quarter notes – [6, 2] denotes a rhythm in which there are two chords, the first one lasts 6 quarter notes, the second 2 quarter notes.

The rhythms in the table are presented in terms of chord duration, and the duration is shown in the number of quarter notes. Some rhythms last two bars (e.g. [8], [6, 2]), and others one bar ([4], [1, 1, 2] etc.).

Generating a rhythm of the harmony proceeds as follows: we draw new rhythms until we have filled as many bars as we need (8 in our case). Certain complications may arise from the fact that the rhythms have different lengths. For example, there may be a situation where, to complete the generation, we need a final rhythm that lasts 4 quarter notes, but we draw one that lasts 8 quarter notes. In this case, in order to avoid unnecessary problems, we can force drawing from the subset of 4-quarter-note rhythms.
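
A sketch of this procedure in code, using the probabilities from Table 1 (the restricted pool implements the forced draw from 4-quarter-note rhythms):

    import random

    # Harmonic rhythms from Table 1: chord durations in quarter notes.
    RHYTHMS = {
        (8,): 0.2, (6, 2): 0.1, (2, 6): 0.1, (7, 1): 0.02, (4,): 0.4,
        (2, 2): 0.1, (2, 1, 1): 0.02, (3, 1): 0.02, (1, 1, 1, 1): 0.02, (1, 1, 2): 0.02,
    }

    def generate_harmonic_rhythm(n_bars=8, quarters_per_bar=4):
        total = n_bars * quarters_per_bar
        result, used = [], 0
        while used < total:
            pool = RHYTHMS
            if total - used == quarters_per_bar:        # only one bar left:
                pool = {r: p for r, p in RHYTHMS.items()
                        if sum(r) == quarters_per_bar}  # draw only 1-bar rhythms
            rhythm = random.choices(list(pool), weights=pool.values())[0]
            result.append(rhythm)
            used += sum(rhythm)
        return result

    print(generate_harmonic_rhythm())   # e.g. [(4,), (4,), (2, 2), (3, 1), (8,), (2, 2)]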

Then, in line with the above findings, let’s suppose that we drew the following rhythms:

  • antecedent: [4, 4], [2, 2], [3, 1], 
  • consequent: [3, 1], [8], [2, 2]

Likelihood

In the next step, we will be using the concept of likelihood. It is a probability not normalised to one (a so-called pseudo-probability), which helps to assess the relative probability of different events. For example, if the likelihoods of events A and B are 10 and 20 respectively, this means that event B is twice as likely as event A. These likelihoods might as well be 1 and 2, or 0.005 and 0.01. From the likelihoods, probabilities can be calculated. If we assume that only events A and B can occur, then their probabilities will be, respectively: P(A) = 10 / (10 + 20) = 1/3 and P(B) = 20 / (10 + 20) = 2/3.
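
In code, a draw based on likelihoods is straightforward, since random.choices normalises the weights by itself (a minimal sketch):

    import random

    def draw(likelihoods):
        """Draw a key from {event: likelihood}; weights need not sum to 1."""
        return random.choices(list(likelihoods), weights=likelihoods.values())[0]

    # Events A and B with likelihoods 10 and 20: B is drawn twice as often as A,
    # i.e. with probabilities 1/3 and 2/3.
    print(draw({"A": 10, "B": 20}))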

Chord progressions

In order to generate probable harmonic flows, we will first prepare N-gram models of harmonic degrees. To this end, we will use the chord progression data available on GitHub (https://github.com/DataStrategist/Musical-chord-progressions).

In our example, we will use 1-, 2-, 3-, 4- and 5-grams.

In the rhythm of the antecedent’s harmony, there are 6 rhythmic values, so we need to prepare a flow of 6 harmonic degrees. We generate the first chord using unigrams (1-grams): we first prepare the likelihoods for each possible degree and then draw while taking these likelihoods into consideration. The formula for likelihood is quite simple in this case:

likelihood_X = p(X)

where

  • X means any harmonic degree
  • p(X) is the probability of the 1-gram of X

In this case, we drew the IV degree (in the adopted key of C major, this is an F major chord).

We generate the second chord using bigrams and unigrams, with a greater weight for bigrams.

likelihood_X = weight_2gram * p(X | IV) + weight_1gram * p(X)

where:

  • p(X | IV) is the probability of the flow (IV, X), i.e. of chord X occurring directly after IV
  • weight_Ngram is the adopted N-gram weight (the greater the weight, the greater the impact of this N-gram model, and the smaller the impact of the other models)

We can adopt N-gram weights as we wish. For this example, we chose the following:

N-gram    1        2       3      4    5
weight    0.001    0.01    0.1    1    5

The next chord we drew was the vi degree (a minor).

The generation of the third chord is similar, except that we can now use 3-grams:

likelihood_X = weight_3gram * p(X | IV, vi) + weight_2gram * p(X | vi) + weight_1gram * p(X)
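
Putting the weights and the N-gram models together, the likelihood computation for an arbitrary context might be sketched as follows (the model dictionaries are placeholders for statistics built from the GitHub data; the result can be fed straight into a weighted draw like the one shown earlier):

    import random

    # N-gram weights as in the table above.
    WEIGHTS = {1: 0.001, 2: 0.01, 3: 0.1, 4: 1, 5: 5}

    # Placeholder models: context tuple -> {next degree: probability}.
    NGRAM_MODELS = {
        1: {(): {"I": 0.3, "IV": 0.25, "V": 0.25, "vi": 0.2}},
        2: {("vi",): {"I": 0.4, "V": 0.3, "IV": 0.3}},
        3: {("IV", "vi"): {"I": 0.5, "iii": 0.3, "IV": 0.2}},
    }

    def chord_likelihoods(history, candidates):
        """likelihood_X = sum over n of weight_ngram * p(X | last n-1 degrees)."""
        scores = {}
        for x in candidates:
            score = 0.0
            for n, weight in WEIGHTS.items():
                if n - 1 > len(history):
                    continue                            # not enough context yet
                context = tuple(history[len(history) - (n - 1):]) if n > 1 else ()
                dist = NGRAM_MODELS.get(n, {}).get(context, {})
                score += weight * dist.get(x, 0.0)
            scores[x] = score
        return scores

    likelihoods = chord_likelihoods(["IV", "vi"], ["I", "iii", "IV", "V", "vi"])
    print(random.choices(list(likelihoods), weights=likelihoods.values())[0])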

And so we continue until we have generated all the necessary chords. In our case, we drew:

IV, vi, I, iii, IV, vi (in the adopted key of C major these are, respectively, F major, a minor, C major, e minor, F major and a minor chords).

This is not a very common chord progression but, as it turns out, it occurs in 5 popular songs (https://www.hooktheory.com/trends#node=4.6.1.3.4.6&key=rel).

Summary

We were able to generate the rhythms and chords which are the components of the harmony of a piece. However, it should still be noted here that, for the sake of simplicity, we didn’t take into account two important factors:

  • The harmonic flows of the antecedent and consequent are very often linked in some way. The harmony of the consequent may be identical with that of the antecedent or perhaps slightly altered to create the impression that these two sentences are somehow linked.
  • The antecedent and consequent almost always end on specific harmonic degrees. This is not a strict rule, but some harmonic degrees are far more likely than others at the end of musical sentences.

For the purposes of the example, however, the task can be deemed completed. The harmony of the piece is ready, now we only need to create a melody to this harmony. In the next part of our article, you will find out how to compose such a melody.

New developments in desktop computers

Today’s desktop computer market is thriving. Technology companies are trying to differentiate their products by incorporating innovative features into them. Recently, the Mac M1 Ultra has received a lot of recognition.

The new computer from Apple stands out above all for its size and portability. Unveiled at the beginning of March, the product is a full-fledged desktop enclosed in a case measuring 197 x 197 x 95 mm. Comparing this to Nvidia’s RTX series graphics cards, for instance the Gigabyte GeForce RTX 3090 Ti 24GB GDDR6X, where the card alone measures 331 x 150 x 70 mm, it appears that one gets a whole computer the size of a graphics card. [4]

Fig. 1 – Apple M1 Ultra – front panel [5]

Difference in construction

Cores are the physical parts of a CPU where processes and calculations take place; the more cores, the faster the computer will run. The manufacturing process, expressed in nm, represents the gate size of the transistors and translates into the power requirements and the heat generated by the CPU. So the smaller the value expressed in nm, the more efficient the CPU.

The M1 Ultra CPU has 20 cores and the same number of threads, and is made with 5 nm technology. [4][6] In comparison, AMD offers a maximum of 16 cores and 32 threads in 7 nm technology [7] (AMD’s new ZEN4 series CPUs are expected to use 5 nm technology, but we do not know the exact specifications at this point [3]) and Intel 16 cores and 32 threads in 14 nm technology [8]. In view of the above, in theory, the Apple product has a significant advantage over the competition in terms of single-thread performance. [Fig. 2]

Performance of the new Apple computer

According to the manufacturer’s claims, the GPU from Apple was supposed to outperform the best graphics card available at the time, the RTX 3090.

Fig. 2 – Graph of the CPU performance against the amount of power consumed [9]. Graph shown by Apple during the presentation of the new product

The integrated graphics card was supposed to deliver better performance while consuming over 200W less power. [Fig. 3] After the release, however, users quickly checked the manufacturer’s assurances and found that the RTX significantly outperformed Apple’s product in benchmark tests.

Fig. 3 – Graph of graphics card performance against the amount of power consumed [9]. Graph shown by Apple during the presentation of the new product, compared to the RTX 3090

The problem is that these benchmarks mostly use software not optimised for the Mac OS, so the Apple product cannot use all of its power. In tests that do use the full GPU power, the M1 Ultra performs very similarly to its dedicated rival. Unfortunately, not all applications are written for Apple’s OS, which severely limits the applications in which we can use the full power of the computer. [10]

The graph below shows a comparison of the frame rate in “Shadow of the Tomb Raider” from 2018. [Fig. 4] The more frames, the smoother the image. [2]

Fig. 4 – The frame rate in the Tomb Raider series game (the more the better) [2].

Power consumption of the new Mac Studio M1 Ultra compared to standard PCs

Despite its high performance, Apple’s new product is very energy-efficient. The manufacturer states that its maximum continuous power consumption is 370W. Standard PCs with modern components do not go below 500W, and the recommended power for hardware with the best parts (Nvidia GeForce RTX 3090 Ti + AMD R7/9 or Intel i7/9) is 1000W. [Table 1]

                        Intel i5 / AMD R5    Intel i7 / AMD R7    Intel i9 K / AMD R9
NVIDIA RTX 3090 Ti      850W                 1000W                1000W
NVIDIA RTX 3090         750W                 850W                 850W
NVIDIA RTX 3080 Ti      750W                 850W                 850W
NVIDIA RTX 3080         750W                 850W                 850W
NVIDIA RTX 3070 Ti      750W                 850W                 850W
NVIDIA RTX 3070         650W                 750W                 750W
Lower graphics cards    650W                 650W                 650W
Table 1 – Table of recommended PSU wattage depending on the CPU and graphics card used. AMD and Intel CPUs in the columns, NVIDIA RTX series graphics cards in the rows. [1]

This means significantly lower running costs for such a computer. Assuming that our computer works 8 hours a day and an average kWh cost of PLN 0.77, we obtain a saving of about PLN 1,500 a year (the difference of roughly 630W between a 1000W PC and the 370W Mac adds up to about 1,840 kWh over 365 days). In countries that are not powered by green energy, this also means less pollution.

Apple’s product problems

Products from Apple have dedicated software, which means better compatibility with the hardware and translates into better performance, but it also means that a lot of software not written for Mac OS cannot fully exploit the potential of the M1 Ultra. The product does not allow the use of two operating systems or the independent installation of Windows/Linux. So it turns out that what allows the M1 Ultra to perform so well in some conditions is also the reason why it is unable to compete in performance in other programs. [10]

Conclusion

The Apple M1 Ultra is a powerful computer in a small box. Its 5 nm technology provides the best energy efficiency among products currently available on the market. However, due to its low compatibility and high price, it will not replace standard computers. To get maximum performance, dedicated software for the Apple operating system is required. When deciding on this computer, one must keep this in mind. For this reason, despite its many advantages, it is more of a product for professional graphic designers, musicians or video editors.

References

[1] https://www.msi.com/blog/we-suggest-80-plus-gold-1000w-and-above-psus-for-nvidia-geforce-rtx-3090-Ti

[2] https://nano.komputronik.pl/n/apple-m1-ultra/

[3] https://www.tomshardware.com/news/amd-zen-4-ryzen-7000-release-date-specifications-pricing-benchmarks-all-we-know-specs

[4] https://www.x-kom.pl/p/730594-nettop-mini-pc-apple-mac-studio-m1-ultra-128gb-2tb-mac-os.html

[5] https://dailyweb.pl/apple-prezentuje-kosmicznie-wydajny-mac-studio-z-nowym-procesorem-m1-ultra/

[6] https://geex.x-kom.pl/wiadomosci/apple-m1-ultra-specyfikacja-wydajnosc/

[7] https://www.amd.com/pl/partner/ryzen-5000-series-desktop

[8] https://www.cpu-monkey.com/en/

[9] https://www.apple.com/pl/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/

[10] https://youtu.be/kVZKWjlquAU?t=301

Cloud computing vs environment

The term “cloud computing” is difficult to define in a clear manner. Companies will approach the cloud differently than individuals. Typically, “cloud computing” is used to mean a network of server resources available on demand – computing power and disk space, but also software – provided by external entities, i.e. the so-called cloud providers. The provided resources are accessible via the Internet and managed by the provider, which eliminates the need for companies to purchase hardware and directly manage physical servers. In addition, the cloud is distributed over multiple data centres located in many different regions of the world, which means that users can count on lower failure rates and higher availability of their services [1].

The basic operation of the cloud

Resources available in the cloud are shared by multiple clients, which makes it possible to make better use of the available computing power and, if utilised properly, can prove to be more cost-effective. Such an approach to resource sharing may raise some concerns, but thanks to virtualisation, the cloud provides higher security than the traditional computing model. Virtualisation makes it possible to create simulated computers, so-called virtual machines, which behave like their physical counterparts, but reside on a single server and are completely isolated from each other. Resource sharing and virtualisation allow for efficient use of hardware and ultimately reduce power consumption by server rooms. Financial savings can be felt thanks to the “pay-as-you-go” business model commonly used by providers, which means that users are billed for actually used resources (e.g. minutes or even seconds of used computing time), as opposed to paying a fixed fee. 

The term “cloud” itself originated as a slang term. In technical diagrams, network and server infrastructure is often represented by a cloud icon [2]. Currently, “cloud computing” is a generally accepted term in IT and a popular computing model. The affordability of the cloud and the fact that users are not required to manage it themselves mean that this computing model is being increasingly preferred by IT companies, which has a positive impact on environmental aspects [3].

Lower power consumption 

The increasing demand for IT solutions leads to increased demand for electricity – a strategic resource in terms of maintaining the cloud. A company maintaining its own server room leads to significant energy expenditure, generated not only by the computer hardware itself but also by the server room cooling system. 

Although it may seem otherwise, larger server rooms which process huge amounts of data at once are more environmentally friendly than local server rooms operated by companies [4]. According to a study carried out by Accenture, migrating a company to the cloud can reduce power consumption by as much as 65%. This stems from the fact that cloud solutions on the largest scale are typically built at dedicated sites, which improves infrastructure organisation and resource management [5]. Providers of large-scale cloud services can design the most effective cooling system in advance. In addition, they make use of modern hardware, which is often much more energy-efficient than the hardware used in an average server room. A study conducted in 2019 revealed that the AWS cloud was 3.6 times more efficient in terms of energy consumption than the median of the surveyed data centres operated by companies in the USA [6].

Moreover, as the cloud is a shared environment, performance can be effectively controlled. The sheer number of users of a single computing cloud allows the consumed energy to be distributed more prudently between individual workloads. Sustainable resource management is also enabled by our Data Engineering product, which collects and analyses data in order to maximise operational efficiency and effectiveness.

Reduction of emissions of harmful substances

Building data processing centres which make use of green energy sources and are based on low-emission solutions makes it possible, among other things, to control emissions of carbon dioxide and other gases which contribute to the greenhouse effect. According to data presented in the “The Green Behind Cloud” report [7], migrating to the public cloud can reduce global carbon dioxide emissions by 59 million tonnes per year, which is equivalent to removing 22 million cars from the roads.

It is also worth considering migration to providers which are mindful of their carbon footprint. For example, the cloud operated by Google is fully carbon-neutral through the use of renewable energy, and the company promises to use only zero-emission energy around the clock in all data centres by 2030 [8]. The Azure cloud operated by Microsoft has been carbon-neutral since 2012, and its customers can track the emissions generated by their services using a special calculator [9].

Reduction of noise related to the use of IT hardware  

Noise is classified as environmental pollution. Though at first glance it may appear quite inconspicuous and harmless, it has a negative impact on human health and the quality of the environment. With respect to humans, it increases the risk of such diseases as cancer, myocardial infarction and arterial hypertension. With respect to the environment, it leads to changes in animal behaviour and affects bird migration and reproduction.

The main source of noise in solutions for storing data on company servers is a special cooling system which maintains the appropriate temperature in the server room. Using cloud solutions makes it possible to reduce the noise emitted by cooling devices at workplaces, which helps limit environmental noise pollution.

If you want to learn more about the available solutions for reducing industrial noise, check our Intelligent Acoustics product.

Waste level reduction 

Making use of cloud computing in business activities, as opposed to having traditional servers as part of company resources, also helps reduce the amount of generated electronic waste. This stems primarily from the fact that cloud computing does not necessitate the purchase of additional equipment or preparation of infrastructure for a server room at the company, which reduces the amount of equipment that needs to be disposed of in the long term.  

In addition, the virtualisation mechanisms employed, which replace a larger number of low-performance servers with a smaller number of high-performance servers able to use their capacity more effectively, optimise and increase server efficiency and thus reduce the demand for hardware resources.

Summary 

Sustainability is currently an important factor in determining the choice of technology. Environmental protection is becoming a priority for companies and for manufacturers of network and telecommunications devices, which means that greener solutions are being sought. Cloud computing definitely fits this trend. It not only limits the consumption of hardware and energy resources, but also reduces the emission of harmful substances into the ecosystem as well as noise emissions into the environment.  

References 

[1] https://www.wit.edu.pl/dokumenty/wydawnictwa_naukowe/zeszyty_naukowe_WITZ_06/0006_Joszczuk-Januszewska.pdf 

[2] https://rocznikikae.sgh.waw.pl/p/roczniki_kae_z36_21.pdf 

[3] http://yadda.icm.edu.pl/yadda/element/bwmeta1.element.ekon-element-000171363539  

[4] Paula Bajdor, Damian Dziembek “Środowiskowe i społeczne efekty zastosowania chmury obliczeniowej w przedsiębiorstwach” [“Environmental and Social Effects of the Use of Cloud Computing in Companies”], 2018 

[5] https://www.accenture.com/_acnmedia/PDF-135/Accenture-Strategy-Green-Behind-Cloud-POV.pdf  

[6] “Reducing carbon by moving to AWS” https://www.aboutamazon.com/news/sustainability/reducing-carbon-by-moving-to-aws

[7] https://www.accenture.com/us-en/insights/strategy/green-behind-cloud

[8] “Operating on 24/7 Carbon-Free Energy by 2030.” https://sustainability.google/progress/energy/

[9] https://www.microsoft.com/en-us/sustainability/emissions-impact-dashboard