Artificial Intelligence — a tool for breaking human records

Go, originating in China with a history spanning over 2,500 years, and chess, originating in India roughly 1,500 years ago, are the most popular strategic board games in the world. They are significant not only as entertainment but also culturally. The rules of both games are clear and precise, so the entry threshold for new players is very low. It is this simplicity that invites inventive play. The games are also about tactics and, above all, enormous human intellectual effort, all in the service of defeating the opponent. 

Artificial Intelligence is not only the subject of serious applications such as Intelligent Acoustics in industry, Artificial Intelligence Adaptation in development research or Data Engineering. The same families of algorithms are also used in entertainment: they power artificial players built to beat humans in board games and even in e-sports. 

At the turn of the 20th and 21st centuries, chess and Go received digital versions, and computer games emerged in which players compete for first place in e-sports titles. In parallel, several artificial intelligence models with appropriately implemented rules were developed to search for better plays and beat human players. In this post, I am going to describe how board games, computer games and artificial intelligence complement and inspire each other. I am also going to show how properly trained artificial intelligence models have defeated not only individual modern grandmasters, but entire teams. 

Artificial intelligence conquers board games

The story of how artificial intelligence defeated a chess grandmaster has its roots in the Deep Blue project led by IBM. The project's main goal was to create a computerised chess system, and Deep Blue was the result of years of work by scientists and engineers, with its first version developed in the 1980s. It relied on techniques such as: 

  • Tree Search based on a database of chess moves and positions, 
  • Position Evaluation, 
  • Depth Search.
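The kind of search listed above can be illustrated in a few lines. The sketch below is a toy, not Deep Blue's actual engine: it runs depth-limited minimax with alpha-beta pruning over a made-up take-away game, with a stand-in evaluation function (all names and numbers are invented for illustration).

```python
def evaluate(position):
    """Stand-in evaluation: a real chess engine scores material, mobility, etc."""
    return -1 if position == 0 else position % 3

def moves(position):
    """Legal moves in a toy take-1-2-or-3 game."""
    return [position - k for k in (1, 2, 3) if position - k >= 0]

def alphabeta(position, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Depth-limited minimax with alpha-beta pruning."""
    if depth == 0 or not moves(position):
        return evaluate(position)
    if maximizing:
        best = float("-inf")
        for child in moves(position):
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:  # prune: the opponent will never allow this line
                break
        return best
    best = float("inf")
    for child in moves(position):
        best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

print(alphabeta(10, depth=4))
```

Deep Blue paired a far more elaborate version of this search with its opening database and custom hardware capable of evaluating on the order of hundreds of millions of positions per second.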

In 1996, the first match between Deep Blue and Garry Kasparov took place. The match was experimental and the first official meeting of its kind. Kasparov won three games, drew two and lost one. In May 1997, they clashed again in New York, and this time Kasparov fell in a duel with artificial intelligence: Deep Blue won twice and lost only once, with three games drawn. 

Fig. 1. Garry Kasparov during a game against Deep Blue in May 1997. 


Less is more

An equally interesting case is AlphaGo, a programme created by DeepMind. This artificial intelligence was designed to play Go, as the world found out when it beat Go grandmaster Lee Sedol. Go is much harder for computers than other games, including chess, because of its far larger number of possible moves, which makes traditional AI methods such as exhaustive search impractical [1, 2]. DeepMind started work on the AlphaGo programme in 2014, aiming to create an algorithm that could compete with the masters. It used advanced machine learning techniques:

  • Deep Learning, 
  • Reinforcement Learning (RL), 
  • Monte Carlo Tree Search.
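To give a flavour of the third technique, the sketch below implements a bare-bones Monte Carlo Tree Search (UCT) for a toy Nim game (take 1-3 objects; whoever takes the last one wins); the pile sizes and constants are arbitrary choices for illustration. AlphaGo's real search additionally used deep neural networks to propose moves and evaluate positions, which this sketch omits.

```python
import math
import random

class Node:
    """One game state; statistics are from the viewpoint of the player who moved into it."""
    def __init__(self, pile, parent=None):
        self.pile, self.parent = pile, parent
        self.children, self.wins, self.visits = {}, 0, 0

def uct_child(node, c=1.4):
    # UCT rule: balance exploitation (win rate) against exploration.
    return max(node.children.values(),
               key=lambda n: n.wins / n.visits
               + c * math.sqrt(math.log(node.visits) / n.visits))

def rollout(pile):
    """Random playout; True if the player to move at `pile` wins (last take wins)."""
    turn = 0
    while pile > 0:
        pile -= random.choice([k for k in (1, 2, 3) if k <= pile])
        if pile == 0:
            return turn == 0  # this player took the last object and wins
        turn = 1 - turn
    return False  # empty pile: the player to move has already lost

def mcts(root_pile, iters=2000):
    root = Node(root_pile)
    for _ in range(iters):
        node = root
        # 1. Selection: descend while the node is fully expanded.
        while node.children and len(node.children) == min(3, node.pile):
            node = uct_child(node)
        # 2. Expansion: try one untried move, if any remain.
        untried = [k for k in (1, 2, 3) if k <= node.pile and k not in node.children]
        if untried:
            k = random.choice(untried)
            node.children[k] = Node(node.pile - k, node)
            node = node.children[k]
        # 3. Simulation from the reached node.
        result = not rollout(node.pile)  # win for the player who moved into `node`
        # 4. Backpropagation, flipping the perspective at each level.
        while node is not None:
            node.visits += 1
            node.wins += result
            result = not result
            node = node.parent
    # Recommend the most-visited move.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print(mcts(10))  # optimal play takes 2, leaving the opponent a multiple of 4
```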

AlphaGo’s first significant achievement was beating European champion Fan Hui in October 2015; the DeepMind engine completely dominated each game, winning five to nil [3]. The next step was to defeat grandmaster Lee Sedol. During the matches, the artificial intelligence surprised not only its opponent but also experts with unconventional, creative moves, demonstrating an ability to anticipate strategies and adapt to changing conditions on the board. After games played from 9 to 15 March 2016, AlphaGo claimed a historic victory over Lee Sedol, winning the five-game series 4-1. 

Competition on digital boards 

In 2018, OpenAI created a team of artificial players, so-called bots, dubbed OpenAI Five. The bot team faced professional players in Dota 2, one of the most complex MOBA (Multiplayer Online Battle Arena) games, in which two teams of five players battle to destroy the opponent’s base. Several advanced machine learning techniques and concepts were used to ‘train’ OpenAI Five:

  • Reinforcement Learning – the bots learned to make decisions by interacting with the environment and receiving rewards for certain actions, 
  • Proximal Policy Optimisation (PPO) – a specific RL technique that, according to the developers, was crucial to the project’s success [5]. This method optimises the so-called policy (i.e. the decision-making strategy) in a way that is more stable and less prone to oscillations than earlier methods such as Trust Region Policy Optimisation (TRPO) [6], 
  • Self-play – the artificial players played millions of games against each other, which allowed them to develop increasingly sophisticated strategies, learning from their mistakes and successes. 
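The heart of PPO is its clipped surrogate objective, introduced in the Schulman et al. paper cited above [6]. The sketch below evaluates that formula for a single sample; real training applies it to neural-network policies over large batches of experience, so treat this purely as an illustration of the formula.

```python
def ppo_clip_objective(prob_new, prob_old, advantage, epsilon=0.2):
    """Clipped surrogate objective from Schulman et al. (2017)."""
    ratio = prob_new / prob_old  # r(theta) = pi_new(a|s) / pi_old(a|s)
    clipped = max(min(ratio, 1 + epsilon), 1 - epsilon)
    # Take the pessimistic (lower) bound, so a large policy jump earns no extra reward.
    return min(ratio * advantage, clipped * advantage)

# If the new policy raises an advantageous action's probability too far,
# the clip caps the gain at (1 + epsilon) * advantage:
print(ppo_clip_objective(prob_new=0.9, prob_old=0.5, advantage=1.0))  # -> 1.2
```

The clipping is what makes PPO more stable than its predecessors: updates that move the policy far from the old one simply stop being rewarded, so there is little incentive for destructive jumps.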

In August 2018, the bots faced the semi-professional paiN Gaming team at the annual world championship ‘The International’ and lost. In April 2019, at the OpenAI Five Finals event, they defeated a team of top players that included members of OG, winners of The International in 2018. DeepMind, on the other hand, decided not to stop with AlphaGo and turned its focus towards StarCraft II, one of the most popular real-time strategy (RTS) games, by creating the AlphaStar programme. In January 2019, it was revealed that AlphaStar had gone into one-on-one duels with professional StarCraft II players and defeated two of the game’s top competitors, Grzegorz ‘MaNa’ Komincz and Dario ‘TLO’ Wünsch. AlphaStar thus proved its capabilities. 

Artificial Intelligence in e-sports

Artificial intelligence is playing an increasingly important role in the training of professional e-sports teams, especially in countries such as South Korea, where League of Legends is one of the most popular games. Here are some key areas in which AI is used for training in professional organisations such as T1 and Gen.G.

Analytics teams use huge amounts of data collected from league and friendly matches. They analyse statistics such as the number of assists, gold earned, the paths taken most frequently and other key indicators. This allows coaches to identify patterns and weaknesses in both their own players and their opponents. 

Advanced training tools using artificial intelligence, such as ‘AIM Lab’ or ‘KovaaK’s’, help players develop specific skills. Such tools can personalise training programmes that focus on improving reactions, aiming, tactical decisions and other key aspects of the game. 

AI tools are also used to create advanced simulations and game scenarios that mimic situations which may occur during a match. Training under conditions that closely resemble real play lets players prepare for unexpected events and make better decisions faster during actual matches. 

AI algorithms can be used to optimise team composition by analysing data on individual player skills and preferences. The results of such studies can suggest which players should play in which positions. They can also help select line-ups to maximise team effectiveness. 
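As a toy illustration of this idea, the sketch below brute-forces the best role assignment from a table of per-role skill scores. The player names, roles and numbers are all invented; a production system would learn such scores from match data and use a proper assignment solver rather than brute force.

```python
from itertools import permutations

roles = ["top", "jungle", "mid", "bot", "support"]
# Hypothetical per-role effectiveness scores for five players.
players = {
    "A": [7, 3, 5, 2, 1],
    "B": [4, 8, 6, 3, 2],
    "C": [5, 6, 9, 4, 3],
    "D": [2, 3, 4, 9, 5],
    "E": [1, 2, 3, 5, 8],
}

def best_lineup(players, n_roles):
    """Brute-force the role assignment with the highest total score."""
    best_score, best_assignment = -1, None
    for perm in permutations(players):       # every ordering of players
        assignment = perm[:n_roles]          # player at index i fills role i
        score = sum(players[p][i] for i, p in enumerate(assignment))
        if score > best_score:
            best_score, best_assignment = score, assignment
    return best_assignment, best_score

lineup, score = best_lineup(players, len(roles))
print(dict(zip(roles, lineup)), score)
```

Brute force is fine for five players; for larger pools an assignment solver such as SciPy's `linear_sum_assignment` (the Hungarian-algorithm approach) scales far better.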


This article has shown how artificial intelligence has dominated board games and established a permanent presence in e-sports, defeating human champions in chess, Go, Dota 2 and StarCraft II. The successes of projects such as Deep Blue, AlphaGo, OpenAI Five and AlphaStar show the potential of AI for creating advanced strategies and improving gaming techniques. Future opportunities include more realistic training scenarios, detailed and personalised player development paths, and predictive analytics that could revolutionise training and strategy across industries. 


[1] Google achieves AI ‘breakthrough’ by beating Go champion, “BBC News”, 27 January 2016 

[2] AlphaGo: Mastering the ancient game of Go with Machine Learning, “Research Blog” 

[3] David Larousserie, Morgane Tual, Première défaite d’un professionnel du go contre une intelligence artificielle, “Le Monde”, 27 January 2016, ISSN 1950-6244 

[4] accessed 13 June 2024 

[5] accessed 13 June 2024 

[6] Schulman, J., Wolski, F., Dhariwal, P., Radford, A., & Klimov, O. (2017). Proximal policy optimization algorithms. arXiv preprint arXiv:1707.06347

Uncanny valley 

The uncanny valley is a term for the eerie, disturbing impression people get when a robot resembles a human being very closely but is not convincingly realistic [1]. The phenomenon was first described in the 1970s, when Japanese roboticist Masahiro Mori observed that robots became more appealing the more they resembled humans in appearance, but that this tendency holds only up to a certain point. He called the phenomenon bukimi no tani (English: uncanny valley). Beyond that point, interest turns into alienation, anxiety or even fear [2].

Fig. 1. Diagram illustrating the uncanny valley phenomenon. 


Why do we experience the uncanny valley?

We have yet to find one definitive answer to this question, but several theories help explain why it occurs. The proposed explanations fall into the following categories:

  • Neurological

In a 2019 study, Fabian Grabenhorst and a team of neuroscientists examined the neurological basis of the uncanny valley. They investigated brain activity in 21 people using functional magnetic resonance imaging (fMRI), a technique that measures changes in blood flow in different brain areas. During the tests, participants rated their level of trust in humans and robots with varying degrees of human likeness. The results showed that specific parts of the brain were particularly important for the uncanny valley. Two regions of the medial prefrontal cortex, involved in attention and perception, showed unusual activity: one transformed the ‘human resemblance signal’ into a ‘human detection signal’ and overemphasised the boundary between human and non-human, while the other correlated this signal with a likability rating. Together, they form a mechanism that closely resembles the uncanny valley phenomenon.

  • Psychological

It turns out that as early as 1919, Sigmund Freud described ‘a strange emotion felt by people which is aroused by certain objects’. He suggested that the feeling we then experience may be related to doubts about whether something inanimate has a ‘soul’. Interestingly, his observation referred not to robots but to realistic dolls and wax figures, suggesting that the phenomenon may be older than we think and apply to more things than just machines. Today, the film industry uses a similar mechanism: many horror films give human characteristics to characters that are not human.

  • Evolutionary

The uncanny valley can also be linked to evolution. The robots we classify in the uncanny valley look like humans but also have features that are clearly not human. Some of these features, such as lifeless skin, unnatural facial features or a voice that does not match their appearance, can make us associate them with something outside the norm or even dangerous. This, in turn, creates aversion or fear in us. When we are confronted with something that is human, but unrealistic, not ‘like a living thing’, it evokes a feeling similar to the one we experience when we come into contact with something that is dead.

  • Cognitive

The uncanny valley may also stem from an existential fear of robots replacing humans. The sight of a robot that resembles a human in appearance but is not human disrupts our expectations of what a human looks like versus what a robot looks like. It raises doubts about who humans are, what they should look like, and how they should behave. It is worth noting that the anxiety does not stem from the mere existence of robots but from the existence of such robots that combine elements that do not usually occur together. For example, robots that ‘sound like robots’ are not a problem for us, while robots with a human voice are [2, 3].

The uncanny valley in reality

The uncanny valley is present in many different areas. Outside robotics, it can also be observed in computer games or films that use computer-generated imagery (CGI). This effect goes beyond technology and can be caused by objects such as realistic dolls, mannequins or wax figures.

  • Sophia

Photo 1. Photo of the Sophia robot. 


Sophia, created by Hanson Robotics and first activated in 2016, is one of the most advanced humanoid robots yet developed. Sophia was granted citizenship of Saudi Arabia, thus becoming the world’s first robot citizen, was named an Innovation Champion of the United Nations Development Programme, and has gained recognition through appearances on TV programmes such as Good Morning Britain and The Tonight Show [4]. Sophia can express various complex emotions, assume human facial expressions and interact with others, and is equipped with natural language processing, facial recognition and visual tracking [5]. Sophia’s ‘skin’ is made of a special material developed by researchers at Hanson Robotics, named Frubber, a type of rubber that resembles the texture and elasticity of human skin [6]. Yet although its appearance and behaviour come close to human, they remain noticeably unnatural; Sophia is thus a case of the uncanny valley and can arouse discomfort and anxiety.

  • The Polar Express

Fig. 2. Computer-generated shot from The Polar Express


The Polar Express is a 2004 animated film directed by Robert Zemeckis. The film was made using CGI, which many believe was misused. The producers of the film adaptation themselves had conflicting visions of how it should be made. In an interview with Wired, Robert Zemeckis said that ‘live action would look awful, and it would be impossible – it would cost $1 billion instead of $160 million.’ In contrast, Tom Hanks, who played seven characters in the film, argued that it should not have been made as animation [7]. The filmmakers found a kind of consensus by combining the two approaches: they used motion capture, a method of recording actors’ movements and then transferring them to a computer. However, critics argue that the filmmakers failed to render the characters well, making them seem insufficiently realistic: they lack human emotions and facial expressions, move unnaturally, and their gaze seems constantly ‘absent’.

Consequences of the uncanny valley

The uncanny valley significantly impacts the future of many areas of our lives. Knowing the unwanted feelings it can cause, roboticists, filmmakers and video game designers can factor the problem into their work. There is clear value in developing robots that do not create mistrust between machine and user; otherwise, they will be poorly received and less useful for their intended purpose.

In films, on the other hand, overly realistic computer-generated characters can, at best, elicit a lack of sympathy from the viewer and, at worst, feelings such as anxiety or even fear. This is why filmmakers often exaggerate certain physical characteristics: giving characters distinctive traits such as outsized eyes, unnatural skin colour or overly dynamic movements is one way of avoiding the uncanny-valley effect. Similar mechanisms are used in computer games, where designers may deliberately avoid overly realistic characters to prevent an unfavourable reception from players. There are exceptions, however: in some cases, filmmakers or game designers want characters that deliberately fall into the uncanny valley. In this way, they can control, for example, how villains are perceived, since a character who combines unnatural and overly realistic features will evoke unease in the audience [8, 9].

The uncanny valley and UX

A very interesting aspect of the uncanny valley is its impact on user interface design. Adding certain realistic elements to an interface can have positive effects: light and shadow suggest that an item can be pressed, and sound can echo what we would also hear in real life. Too much realism, however, blurs the line between the virtual and the real. Consider a highly detailed calendar application whose texture resembles natural paper: the fact that we cannot touch it but only ‘scroll’ through it on a computer or smartphone screen can give the impression of something strange, ‘not right’. This is why it is so important not to strive for elements that completely mirror real objects. By striking the right balance between realism and abstraction, the user experience remains enjoyable and free of this dissonance [10].

Fig. 3. A very realistic Google Chrome logo from 2008 and its upgraded, much less realistic version from 2011. 



People experience anxiety when encountering human-like entities that look almost, but not quite, realistic; this phenomenon is called the uncanny valley. It matters in many areas, including advanced robots, computer-generated characters and even non-technological forms such as dolls or wax figures. Its implications can significantly affect the acceptance and usability of technology. In the context of UX, awareness of the uncanny valley is crucial for designers who seek to minimise undesirable effects by designing interfaces appropriately, so that users feel comfortable and engaged in their interactions with products.












Society 5.0

The idea behind Society 5.0 is to create a super-intelligent society in which various social challenges are solved by implementing innovations of the fourth industrial revolution, such as IoT, Big Data, Artificial Intelligence (AI), robotics and the sharing economy, into every industry and into social life. In such a world, people, machines and their environment are interconnected and able to communicate with each other [1]. In practice, Society 5.0 will, among other things, seek to provide better care for seniors: in Japan, the population is ageing rapidly, and if there is ever a shortage of hands to care for the elderly, new computing capabilities could raise the standard of healthcare for retirees [2]. Society 5.0 thus refers to a new society in which technological development is human-centred and seeks valuable solutions for the lives of people around the world.

Solutions for Better Human Life

Fig. 1. Illustration of Japan’s social transformation plan — Society 5.0. 

[Accessed: 7 March 2024]. 

History of the Development of Society

Society 5.0 is the result of an evolution spanning five stages of social development: 

  • Society 1.0: Hunter-gatherer society (the way of life of the first humans, which lasted until about 12,000 years ago) — a society that based its lifestyle on hunting animals and gathering wild vegetation and other types of nutrients [3]. 
  • Society 2.0: Agricultural society (first appearing around 10,000–8,000 years ago) — a society that based its economy primarily on agriculture and the cultivation of large fields [4]. 
  • Society 3.0: Industrial society (from the late 18th century onwards) — a society in which the dominant way of organising life is through mass production technologies, used to produce immense quantities of goods in factories [5]. 
  • Society 4.0: Information society (since the second half of the 20th century) — a society in which the creation, dissemination, use, integration and management of information is an essential aspect of economic, political or cultural activities [6]. 

Technological Integration for a Better Quality of Life

The concept of collecting data from the world around us, processing it by computers and putting it to practical use is not new in today’s world. The operation of air conditioners, for example, is based on exactly this principle. They regularly measure the temperature in a room and then compare the reading with a pre-programmed temperature. Depending on whether the measured temperature is higher or lower than the one originally set, the device pauses or starts the airflow. This mechanism uses automated computer systems. The term ‘information society’ (Society 4.0) therefore refers to a society in which each such system acquires data, processes it and then uses it in its own specified environment.
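The air-conditioner loop described above can be sketched in a few lines. This is a simulation only, with made-up temperature readings and a hypothetical set point; a real system would read actual hardware sensors.

```python
SET_POINT = 22.0   # desired room temperature in degrees Celsius (assumed)
HYSTERESIS = 0.5   # dead band to avoid rapid on/off switching

def control_step(measured_temp, cooling_on):
    """Decide whether the cooling should run, given the latest reading."""
    if measured_temp > SET_POINT + HYSTERESIS:
        return True      # too warm: start or keep cooling
    if measured_temp < SET_POINT - HYSTERESIS:
        return False     # cool enough: stop
    return cooling_on    # inside the dead band: keep the current state

# Simulated sensor readings over time.
readings = [24.1, 23.0, 22.3, 21.8, 21.4, 22.0, 23.1]
state = False
for t in readings:
    state = control_step(t, state)
    print(f"{t:.1f} C -> cooling {'on' if state else 'off'}")
```

This measure-compare-act cycle, repeated by many independent systems each in its own narrow environment, is exactly what characterises the information society (Society 4.0) described above.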

Now that we know exactly what Society 4.0 is, we can see what distinguishes it from Society 5.0. The fundamental difference is that instead of systems that operate in a defined, limited way, Society 5.0 will use systems that operate in an integrated way, affecting the life of society as a whole. Data will be processed by advanced information systems, such as Artificial Intelligence, which are suited to such large volumes of data. The main purpose of using the collected data will be to ensure everyone’s happiness and comfort [7]. At BFirst.Tech, we also see these needs and respond to them with specific tools. Our areas, Data Engineering and Data Architecture & Management, use innovative technological solutions to collect, analyse and manage data in support of efficient and sustainable process management. Such management has a significant impact on security, data reliability and strategic decision-making, which contributes to the prosperity of society.

The New Era of Prosperity and the Challenges It Faces

Society 5.0 aims to use state-of-the-art technology in such a way as to ensure the well-being of all people. The idea is that technological development can be a tool to address social inequalities, improve quality of life and create a more sustainable community. The main objectives it envisages are:

  • reducing social inequalities, 
  • speeding up medical services and increasing the precision of medical procedures and operations, 
  • increasing food production while reducing waste, 
  • improving public safety, 
  • solving problems caused by natural disasters, 
  • promoting public participation in the development of ideas and projects, 
  • ensuring transparent access to data and information security. 

Society 5.0 aims to create a harmonious balance between technological development and societal needs, but this brings its own challenges. One of the most crucial conditions for this vision’s successful implementation is the commitment and leadership of governments. This is because governments are responsible for aspects such as funding, the implementation of technology in public life or the creation of new security-related legislation. Cybersecurity risks are another significant challenge. It is important to bear in mind that the actions of hackers, or issues related to data theft, can effectively hinder the development of innovation, so it is crucial to ensure a sound level of data protection [8].

The United Nations Sustainable Development Goals

Society 5.0 and the United Nations Sustainable Development Goals are two separate initiatives moving in a very similar direction. These two innovative approaches share one common goal: to eliminate social problems sustainably. It can be said that Society 5.0 will, in a way, realise the Sustainable Development Goals through specific actions. These actions, matched with specific goals, are:

  • aiming for more accurate and efficient diagnosis of diseases through the use of advanced technologies (such as Big Data and Artificial Intelligence),

Fig. 2. Illustration of UN Sustainable Development Goal 3. 


  • disseminating e-learning and making education more accessible,

Fig. 3. Illustration of UN Sustainable Development Goal 4. 


  • creation of new jobs related to fields such as robotics, Artificial Intelligence or data analytics,

Fig. 4. Illustration of UN Sustainable Development Goal 8. 


  • promoting innovation and investing in new infrastructure (such as smart networks or high-speed internet),

Fig. 5. Illustration of UN Sustainable Development Goal 9. 


  • creating smart cities that use sensors and data analysis to optimise traffic flow, reduce energy consumption and improve safety, 

Fig. 6. Illustration of UN Sustainable Development Goal 11. 


  • reducing greenhouse gas emissions and promoting sustainable transport.

Fig. 7. Illustration of UN Sustainable Development Goal 13.


Common Direction

It is crucial that the benefits of Society 5.0 are available to everyone equally, so that all have the same opportunity to benefit from its potential. Only with such an approach can Society 5.0’s contribution to the Sustainable Development Goals be effective [9]. BFirst.Tech, as a substantive partner of the United Nations Global Compact Network Poland (UN GCNP), also works towards the Sustainable Development Goals through the specific activities it undertakes. In its data-focused areas, Data Engineering and Data Architecture & Management, the company pursues goals that overlap with those of Society 5.0: Goal 9, by securing, aggregating and analysing big data and by optimising, managing and controlling the quality of processes using AI; Goal 11, by securing critical information that improves the lives of urban residents; and Goal 13, by reducing resource consumption and waste emissions through increased production efficiency.

Changes Affecting Numerous Areas

With the implementation of the Society 5.0 concept, many facets of society can be modernised. As mentioned earlier, one of these is healthcare. With its ageing population, Japan is currently grappling with rising expenses and a growing need to care for seniors. Society 5.0 addresses this by introducing Artificial Intelligence that collects and analyses patient data to support high-quality diagnosis and treatment. Remote medical consultations, in turn, improve convenience for the elderly, allowing them to contact a doctor from their own place of residence.

Another facet is mobility. Most rural areas of Japan lack public transport, partly because of a declining and increasingly dispersed population. The growing shortage of drivers, linked to the ever-expanding e-commerce sector, is also a problem. The solution Society 5.0 proposes is the implementation of autonomous vehicles such as taxis and buses. Infrastructure is also worth mentioning: in Society 5.0, sensors, AI and robots will autonomously inspect and maintain roads, tunnels, bridges and dams. The final area is financial technology (FinTech). In Japan, most monetary transactions are still carried out in cash or through banking procedures that can take far too long. Society 5.0 proposes the implementation of Blockchain technology for monetary transactions and the introduction of universal smartphone payments available everywhere [10]. 


Society 5.0 is the concept of a society that uses advanced technologies to pursue sustainability, social innovation and digital transformation. Its aim is not only economic growth but also a better quality of life for citizens. The idea also faces challenges, mainly around data security and the introduction of appropriate regulations to ensure a transition that is smooth and comfortable for all. Society 5.0 largely shares its vision of the future with the Sustainable Development Goals (SDGs) announced by the United Nations; many SDG targets can be achieved through the implementation of this concept. Society 5.0 encompasses a wide range of areas, including healthcare, mobility, infrastructure and financial technology. Through advanced technologies in these areas, the aim is to create a sustainable and innovative society that positively impacts citizens’ quality of life.


[1] [Accessed: 7 March 2024]. 






[7] Atsushi Deguchi, Chiaki Hirai, Hideyuki Matsuoka, Taku Nakano, Kohei Oshima, Mitsuharu Tai, Shigeyuki Tani “What is Society 5.0?” 





What makes some websites appear immediately after entering a search query, while others disappear among other sites? How can we make it easier for users to find our website? SEO is responsible for these and other aspects, and it has nothing to do with chance. Whether you are just starting out with running a website or have been doing it for a long time, and whether you handle everything yourself or delegate it to someone else, it is important to know the basic principles of SEO. After reading this article, you will know what SEO is, what it consists of and how to use it properly. 

What is SEO?

Let’s start with what SEO actually is and what it consists of. SEO (Search Engine Optimization) is a set of activities undertaken to improve the positioning of a website in search results [1]. It consists of various practices and strategies, such as proper text editing and building a link profile. SEO also involves adapting the website to algorithms used by search engines. These algorithms determine which pages will be displayed on the first page of search results and in what order. Through optimization, a website can gain a better position in the search results, which increases its visibility.

It is important to remember, of course, that SEO is only one way to improve the popularity of a website. It does not produce results as quickly as, for example, paid advertising, but it is relatively inexpensive. Furthermore, the effect lasts longer and does not disappear when a subscription expires, as is the case with many other marketing techniques.

On-site positioning

We can divide SEO into two types: on-site and off-site. On-site SEO includes all activities that take place on the website itself: editorial and technical matters, including those that affect content loading speed. Taking care of these aspects makes the website more readable for both users and Google’s robots. Good on-site SEO requires attention to:

  • Metadata and ALT description – even if a page is readable for users, what about search engine algorithms? To make it readable for them as well, it’s worth taking care of meta titles and descriptions, which will help search engines find our website. In addition, it is also worth taking care of ALT descriptions, also known as alternative text. Algorithms don’t understand what’s in images. With this short description, they will be able to assign its content to the searched phrase and improve positioning. 
  • Header – this is another thing that affects more than just human perception. Proper distribution of headers and content optimization in them can significantly contribute to improved positioning. 
  • Hyperlinks – the set of links, also known as the link profile. Here we can distinguish between external and internal linking. External linking refers to links coming from websites other than our own and is considered off-site SEO. On the other hand, internal linking refers to links within a single website that redirect users to other tabs or articles. 

Off-site positioning

Off-site SEO refers to all activities undertaken outside the website to increase its visibility and recognition on the web. This helps generate traffic to the site from external sources. Such activities include:

  • Hyperlinks – again, a link profile that builds a site’s popularity and recognition on the web. Off-site SEO includes external linking, i.e. from other sources. It is worth ensuring that these are of good quality, i.e. from reliable sources. Gone are the days when only quantity mattered. Nowadays, search engine algorithms pay much more attention to value.
  • Internet marketing – this includes activities such as running profiles on social media, engaging in discussions with users on forums, or collaborating with influencers. These aspects do not directly affect search results but can indirectly contribute a great deal to boosting the number of queries about our website. 
  • Reviews – after some time, opinions about a website or business naturally appear on the web. It’s worth taking care of them and responding to users who leave them. Maintaining a good customer opinion is one aspect of building a trustworthy brand image [3].

Link building and positioning

Link building is the process of acquiring links that will lead to our website. These can be links from external sources (so-called backlinks) or internal linking. In that case, we are talking about links that will redirect us within a given website. A well-built link profile significantly affects positioning, as discussed above [4]. However, how has the significance of such practices changed? 

For many years, Google allowed SEO practitioners a lot of leeway in this regard. It was commonplace to encounter sites that had hundreds of thousands of links leading to them because the number of links had a significant impact on positioning, and their quality was not as crucial. The vast majority of these were low-quality links, which were posted online in forums, guestbooks, directories, comments, etc. This was often not handled by a human, but special applications were used that did it automatically. This approach brought significant results and could be carried out relatively inexpensively. But not for long. This all changed in April 2012. There was a kind of revolution back then – Google introduced a new algorithm called Penguin.

How did Penguin change SEO?

What is Penguin? It is an algorithm created by Google and introduced on 24th April 2012, to combat unethical SEO practices. SEO specialists tried to trick Google’s script by buying links and placing them in inappropriate places, but Penguin effectively caught them. 

Let’s try to answer how Penguin works. This script analyses the links leading to a particular website and decides on their value. If it deems them to be of low quality, it will lower the rankings of the sites they lead to. Such links include purchased ones (also from link exchanges) or those created by bots. It will also do the same for spam links, such as those placed in forum comments or on completely unrelated websites. However, its action is not permanent – when low-quality links are removed, a given website can regain its position. It’s worth mentioning that Penguin was not created only to detect fraud and reduce the visibility of websites in search results. Its role is also to reward honestly conducted websites. If it deems the link profile valuable, it will increase the visibility of such sites [6].
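Google has never published Penguin’s actual criteria, so any code can only be a toy illustration. The sketch below shows the general idea of scoring backlinks on quality signals and flagging a suspicious link profile; the signals, weights and threshold are entirely hypothetical.

```python
# A toy illustration of link-profile filtering.
# Google's real Penguin criteria are not public; these signals are hypothetical.

def link_quality(link):
    """Score a backlink on simple, invented quality signals."""
    score = 0
    if link["same_topic"]:     # link comes from a thematically related site
        score += 2
    if not link["paid"]:       # purchased links are penalised
        score += 2
    if not link["from_spam"]:  # bot-generated / forum-spam links are penalised
        score += 1
    return score

def profile_is_suspicious(links, threshold=3.0):
    """Flag a link profile whose average quality falls below the threshold."""
    avg = sum(link_quality(l) for l in links) / len(links)
    return avg < threshold

links = [
    {"same_topic": True, "paid": False, "from_spam": False},  # score 5
    {"same_topic": False, "paid": True, "from_spam": True},   # score 0
]
print(profile_is_suspicious(links))  # True (average 2.5 is below 3.0)
```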

Ethical and unethical positioning

Depending on what we base our SEO techniques on, a distinction can be made between White Hat SEO and Black Hat SEO. These terms allude to the good and evil characters in Western tales. According to culturally accepted convention, such characters usually wore white and black hats respectively, hence the association. But what do these terms mean and how do the techniques differ?

White Hat SEO is ethical SEO, applied according to guidelines recommended by search engines. It involves procedures such as creating good quality content (free of duplicates). Using headings, bullet points and ensuring paragraphs are the right length is also important. Black Hat SEO, on the other hand, is characterized by unethical behavior aimed at artificially boosting popularity. These include practices such as overusing key phrases out of context, hiding text or buying links. Such actions can result in a decrease in trust in the site and the imposition of filters lowering its position. Even exclusion from search results is possible [7].


The key to increasing traffic to a website and improving its positioning is the skilful use of SEO tools. These are both on-site and off-site techniques that can significantly increase reach. When using SEO, it is important to remember to do it properly. By following the recommendations of search engines and adapting the content to both the user and the algorithms, we can count on positive results and improved statistics. Unethical practices, on the other hand, can lead to the opposite effect.









Moral dilemmas associated with Artificial Intelligence

Artificial intelligence is one of the most exciting technological developments of recent years. It has the potential to fundamentally change the way we work and use modern technologies in many areas. We are talking about text and image generators, various types of algorithms or autonomous cars. However, as the use of artificial intelligence becomes more widespread, it is also good to be aware of the potential problems it brings with it. Given the increasing dependence of our systems on artificial intelligence, how we approach these dilemmas could have a crucial impact on the future image of society. In this article, we will present these moral dilemmas. We will also discuss the problems associated with putting autonomous vehicles on the roads. Next we will jump to the dangers of using artificial intelligence to sow disinformation. Finally, we will address the concerns about the intersection of artificial intelligence and art.

The problem of data acquisition and bias

As a rule, human judgements are burdened by a subjective perspective; machines and algorithms are expected to be more objective. However, how machine learning algorithms work depends heavily on the data used to teach them. Therefore, training data carrying any bias, even an unconscious one, can cause undesirable actions by the algorithm. Please have a look at our article for more information on this topic.

Levels of automation in autonomous cars

In recent years, we have seen great progress in the development of autonomous cars. There has been a lot of footage on the web showing prototypes of vehicles moving without the driver’s assistance or even presence. When discussing autonomous cars, it is worth pointing out that there are multiple levels of autonomy. It is worth identifying which level one is referring to before the discussion. [1]

  • Level 0 indicates vehicles that require full driver control, with the driver performing all driving actions (steering, braking, acceleration, etc.). However, the vehicle can inform the driver of hazards on the road. It will use systems such as collision warning or lane departure warnings to do so. 
  • Level 1 includes vehicles that are already common on the road today. The driver is still in control of the vehicle, which is equipped with driving assistance systems such as cruise control or lane-keeping assist. 
  • Level 2, in addition to having the capabilities of the previous levels, is – under certain conditions – able to take partial control of the vehicle. It can influence the speed or direction of travel, under the constant supervision of the driver. The support functions include controlling the car in traffic jams or on the motorway. 
  • Level 3 of autonomy refers to vehicles that are not yet commercially available. Cars of this type are able to drive fully autonomously, under the supervision of the driver. The driver still has to be ready to take control of the vehicle if necessary. 
  • Level 4 means that the on-board computer performs all driving actions, but only on certain previously approved routes. In this situation, all persons in the vehicle act as passengers. However, it is still possible for a human to take control of the vehicle. 
  • Level 5 is the highest level of autonomy – the on-board computer is fully responsible for driving the vehicle under all conditions, without any need for human intervention. [2] 
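For readers who prefer code to prose, the taxonomy above can be captured in a small Python enum. The level names and the supervision rule below are a shorthand summary of the descriptions in this article, not an official SAE definition.

```python
from enum import IntEnum

class AutonomyLevel(IntEnum):
    """Levels of driving automation, as summarised above."""
    NO_AUTOMATION = 0           # driver does everything; warnings only
    DRIVER_ASSISTANCE = 1       # cruise control, lane-keeping assist
    PARTIAL_AUTOMATION = 2      # car can steer/accelerate under driver supervision
    CONDITIONAL_AUTOMATION = 3  # drives itself; driver must be ready to take over
    HIGH_AUTOMATION = 4         # fully autonomous on approved routes
    FULL_AUTOMATION = 5         # no human intervention needed anywhere

def driver_must_supervise(level):
    # Up to level 3, a human must remain ready to take control.
    return level <= AutonomyLevel.CONDITIONAL_AUTOMATION

print(driver_must_supervise(AutonomyLevel.PARTIAL_AUTOMATION))  # True
print(driver_must_supervise(AutonomyLevel.FULL_AUTOMATION))     # False
```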

Moral dilemmas in the face of autonomous vehicles

Vehicles with autonomy levels 0-2 are not particularly controversial. Technologies such as car control on the motorway are already available and make travelling easier. However, the potential introduction of vehicles with higher autonomy levels into general traffic raises some moral dilemmas. What happens when an autonomous car, under the care of a driver, is involved in an accident? Who is then responsible for causing it? The driver? The vehicle manufacturer? Or perhaps the car itself? There is no clear answer to this question.

Putting autonomous vehicles on the roads also introduces another problem – these vehicles may have security vulnerabilities. Something like this could potentially lead to data leaks or even a hacker taking control of the vehicle. A car taken over in this way could be used to deliberately cause an accident or even carry out a terrorist attack. There is also the problem of dividing responsibility between the manufacturer, the hacker and the user. [3]

One of the most crucial issues related to autonomous vehicles is the ethical training of vehicles to make decisions. It is especially important in the event of danger to life and property. Who should make decisions in this regard – software developers, ethicists and philosophers, or perhaps country leaders? These decisions will affect who survives in the event of an unavoidable accident. Many of the situations that autonomous vehicles may encounter will require decisions that do not have one obvious answer (Figure 1). Should the vehicle prioritise saving pedestrians or passengers, the young or the elderly? How important is it for the vehicle not to interfere with the course of events? Should compliance with the law by the other party to the accident influence the decision? [4]

An illustration of one of the situations that autonomous vehicles may encounter

Fig. 1. An illustration of one of the situations that autonomous vehicles may encounter. Source:  

Deepfake – what is it and why does it lead to misinformation?

Anyone using modern technology is bombarded with information from all sides. The sheer volume and speed of information delivery means that not all of it can be verified. This fact enables those fabricating fake information to reach a relatively large group of people. This allows them to manipulate their victims into changing their minds about a certain subject or even attempt to deceive them. Practices like this have been around for some time, but they did not pose such moral dilemmas. The advent of artificial intelligence dramatically simplifies the process of creating fake news and thus allows it to be created and disseminated more quickly.

Among disinformation techniques, artificial intelligence has the potential to be used particularly effectively to produce so-called deepfakes. Deepfake is a technique for manipulating images depicting people that relies on artificial intelligence. With the help of machine learning algorithms, modified images are superimposed on existing source material, thereby creating realistic videos and images depicting events that never took place. Until now, the technology mainly allowed for the processing of static images, and video editing was far more difficult to perform. The popularisation of artificial intelligence has dissolved these technical barriers, which has translated into a drastic increase in the frequency of this phenomenon. [5]

Video 1. Deepfake in the form of video footage using the image of President Obama.

Moral dilemmas associated with deepfakes

Deepfakes could be used to achieve a variety of purposes. The technology could be used for harmless projects, for example educational materials such as the video showing President Obama warning about the dangers of deepfakes (see Video 1). Alongside this, it finds applications in the entertainment industry, such as the use of digital replicas of actors (although this application can raise moral dilemmas of its own). An example is the use of a digital likeness of the late actor Peter Cushing to play the role of Grand Moff Tarkin in the film Rogue One: A Star Wars Story (see Figure 2).

A digital replica of actor Peter Cushing as Grand Moff Tarkin

Fig. 2. A digital replica of actor Peter Cushing as Grand Moff Tarkin. Source: 

However, there are also many other uses of deepfakes that have the potential to pose a serious threat to the public. Such fabricated videos can be used to disgrace a person, for example by using their likeness in pornographic videos. Fake content can also be used in all sorts of scams, such as attempts to extort money. An example of such use is the case of a doctor whose image was used in an advertisement for cardiac pseudo-medications, which we cited in a previous article [6]. There is also a lot of controversy surrounding the use of deepfakes for the purpose of sowing disinformation, particularly in the area of politics. Used successfully, fake content can lead to diplomatic incidents, change the public’s reaction to certain political topics, discredit politicians and even influence election results. [7]

By its very nature, the spread of deepfakes is not something that can be easily prevented. Legal solutions are not fully effective due to the global scale of the problem and the nature of social network operation. Other proposed solutions to the problem include developing algorithms to detect fabricated content and educating the public about it.

AI-generated art

There are currently many AI-based text, image or video generators on the market. Midjourney, DALL-E, Stable Diffusion and many others, despite the different implementations and algorithms underlying them, have one thing in common – they require huge amounts of data which, due to their size, can be obtained only from the Internet – often without the consent of the authors of these works. As a result, a number of artists and companies have decided to file lawsuits against the companies developing artificial intelligence models. According to the plaintiffs, the latter are illegally using millions of copyrighted images retrieved from the Internet. To date, the most high-profile lawsuit is the one filed by Getty Images – an agency that offers images for business use – against Stability AI, creators of the open-source image generator Stable Diffusion. The agency accuses Stability AI of copying more than 12 million images from their database without prior consent or compensation (see Figure 3). The outcome of this and other legal cases related to AI image generation will shape the future applications and possibilities of this technology. [8]

An illustration used in Getty Images' lawsuit showing an original photograph and a similar image with a visible Getty Images watermark generated by Stable Diffusion. Graphic shows football players during a match.

Fig. 3. An illustration used in Getty Images’ lawsuit showing an original photograph and a similar image with a visible Getty Images watermark generated by Stable Diffusion. Source:  

In addition to the legal problems of training generative models on the basis of copyrighted data, there are also moral dilemmas about artworks made with artificial intelligence. [9]

Will AI replace artists?

Many artists believe that artificial intelligence cannot replicate the emotional aspects of art that works by humans offer. When we watch films, listen to music and play games, we feel certain emotions that algorithms cannot give us. They are not creative in the same way that humans are. There are also concerns about the financial situation of many artists. These stem both from artists not being compensated for works included in the algorithms’ training collections, and from the reduced number of commissions caused by the popularity and ease of use of the generators. [10]

On the other hand, some artists believe that artificial intelligence’s different way of “thinking” is an asset. It can create works that humans are unable to produce. This is one way in which generative models can become another tool in the hands of artists. With them they will be able to create art forms and genres that have not existed before, expanding human creativity.

The popularity and possibilities of generative artificial intelligence continue to grow. Consequently, there are numerous debates about the legal and ethical issues surrounding this technology. It has the potential to drastically change the way we interact with art.


The appropriate use of artificial intelligence has the potential to become an important and widely used tool in the hands of humanity. It has the potential to increase productivity, facilitate a wide range of activities and expand our creative capabilities. However, the technology carries certain risks that should not be underestimated. Reckless use of autonomous vehicles, AI art or deepfakes can lead to many problems. These can include financial or image losses, but even threats to health and life. Further developments of deepfake detection technologies, new methods of recognising disinformation and fake video footage, as well as new legal solutions and educating the public about the dangers of AI will be important in order to reduce the occurrence of these problems.












Cloud computing vs environment

The term “cloud computing” is difficult to define in a clear manner. Companies will approach the cloud differently than individuals. Typically, “cloud computing” is used to mean a network of server resources available on demand – computing power and disk space, but also software – provided by external entities, i.e. the so-called cloud providers. The provided resources are accessible via the Internet and managed by the provider, which eliminates the need for companies to purchase hardware and directly manage physical servers. In addition, the cloud is distributed over multiple data centres located in many different regions of the world, which means that users can count on lower failure rates and higher availability of their services [1].

The basic operation of the cloud

Resources available in the cloud are shared by multiple clients, which makes it possible to make better use of the available computing power and, if utilised properly, can prove to be more cost-effective. Such an approach to resource sharing may raise some concerns, but thanks to virtualisation, the cloud provides higher security than the traditional computing model. Virtualisation makes it possible to create simulated computers, so-called virtual machines, which behave like their physical counterparts, but reside on a single server and are completely isolated from each other. Resource sharing and virtualisation allow for efficient use of hardware and ultimately reduce power consumption by server rooms. Financial savings can be felt thanks to the “pay-as-you-go” business model commonly used by providers, which means that users are billed for actually used resources (e.g. minutes or even seconds of used computing time), as opposed to paying a fixed fee. 
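The “pay-as-you-go” idea can be illustrated with a few lines of Python. The per-second rate, the flat fee and the workload below are made-up numbers chosen only to show how billing for actual usage differs from a fixed fee, not real provider pricing.

```python
# Illustrative comparison of pay-as-you-go billing vs a fixed monthly fee.
# All rates and usage figures are hypothetical.

def pay_as_you_go_cost(seconds_used, rate_per_second):
    """Bill only for the compute time actually consumed."""
    return seconds_used * rate_per_second

FIXED_MONTHLY_FEE = 500.0  # hypothetical flat fee for a dedicated server
RATE = 0.0002              # hypothetical price per second of compute

# A workload that runs only 2 hours a day for 30 days:
seconds_used = 2 * 3600 * 30
cloud_cost = pay_as_you_go_cost(seconds_used, RATE)
print(f"pay-as-you-go: {cloud_cost:.2f}, flat fee: {FIXED_MONTHLY_FEE:.2f}")
```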

The term “cloud” itself originated as a slang term. In technical diagrams, network and server infrastructure is often represented by a cloud icon [2]. Currently, “cloud computing” is a generally accepted term in IT and a popular computing model. The affordability of the cloud and the fact that users are not required to manage it themselves mean that this computing model is being increasingly preferred by IT companies, which has a positive impact on environmental aspects [3].

Lower power consumption 

The increasing demand for IT solutions leads to increased demand for electricity – a strategic resource in terms of maintaining the cloud. A company maintaining its own server room leads to significant energy expenditure, generated not only by the computer hardware itself but also by the server room cooling system. 

Although it may seem otherwise, larger server rooms which process huge amounts of data at once are more environmentally friendly than local server rooms operated by companies [4]. According to a study carried out by Accenture, migrating a company to the cloud can reduce power consumption by as much as 65%. This stems from the fact that cloud solutions on the largest scale are typically built at dedicated sites, which improves infrastructure organisation and resource management [5]. Providers of large-scale cloud services can design the most effective cooling system in advance. In addition, they make use of modern hardware, which is often much more energy-efficient than the hardware used in an average server room. A study conducted in 2019 revealed that the AWS cloud was 3.6 times more efficient in terms of energy consumption than the median of the surveyed data centres operated by companies in the USA [6].

Moreover, as the cloud is a shared environment, performance can be effectively controlled. The large number of users sharing a single computing cloud allows the consumed energy to be distributed more prudently across individual workloads. Sustainable resource management is also enabled by our Data Engineering product, which collects and analyses data in order to maximise operational efficiency and effectiveness.

Reduction of emissions of harmful substances

Building data processing centres which make use of green energy sources and are based on low-emission solutions makes it possible, among other things, to control emissions of carbon dioxide and other gases which contribute to the greenhouse effect. According to data presented in the “The Green Behind Cloud” report [7], migrating to public cloud can reduce global carbon dioxide emissions by 59 million tonnes per year, which is equivalent to removal of 22 million cars from the roads.
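A quick sanity check of the figures quoted from the report shows the per-car emissions they imply:

```python
# The report equates 59 million tonnes of CO2 per year with removing
# 22 million cars from the roads, implying roughly 2.7 tonnes per car per year.
saved_tonnes = 59_000_000
cars_equivalent = 22_000_000
per_car = saved_tonnes / cars_equivalent
print(round(per_car, 2))  # 2.68
```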

It is also worth considering migration to providers which are mindful of their carbon footprint. For example, the cloud operated by Google is fully carbon-neutral through the use of renewable energy, and the company promises to use only zero-emission energy around the clock in all data centres by 2030 [8]. The Azure cloud operated by Microsoft has been carbon-neutral since 2012, and its customers can track the emissions generated by their services using a special calculator [9].

Reduction of noise related to the use of IT hardware  

Noise is classified as environmental pollution. Though at first glance it may appear quite inconspicuous and harmless, it has a negative impact on human health and the quality of the environment. With respect to humans, it increases the risk of such diseases as cancer, myocardial infarction and arterial hypertension. With respect to the environment, it leads to changes in animal behaviour and affects bird migration and reproduction.

The main source of noise in solutions for storing data on company servers is a special cooling system which maintains the appropriate temperature in the server room. Using cloud solutions makes it possible to reduce the noise emitted by cooling devices at workplaces, which helps limit environmental noise pollution.

If you want to learn more about the available solutions for reducing industrial noise, check our Intelligent Acoustics product.

Waste level reduction 

Making use of cloud computing in business activities, as opposed to having traditional servers as part of company resources, also helps reduce the amount of generated electronic waste. This stems primarily from the fact that cloud computing does not necessitate the purchase of additional equipment or preparation of infrastructure for a server room at the company, which reduces the amount of equipment that needs to be disposed of in the long term.  

In addition, the employed virtualisation mechanisms – replacing a larger number of low-performance servers with a smaller number of high-performance servers able to use their capacity more effectively – optimise and increase server efficiency, and thus reduce the demand for hardware resources.  


Sustainability is currently an important factor in determining the choice of technology. Environmental protection is becoming a priority for companies and for manufacturers of network and telecommunications devices, which means that greener solutions are being sought. Cloud computing definitely fits this trend. It not only limits the consumption of hardware and energy resources, but also reduces the emission of harmful substances into the ecosystem as well as noise emissions into the environment.  





[4] Paula Bajdor, Damian Dziembek “Środowiskowe i społeczne efekty zastosowania chmury obliczeniowej w przedsiębiorstwach” [“Environmental and Social Effects of the Use of Cloud Computing in Companies”], 2018 


[6] “Reducing carbon by moving to AWS”


[8] “Operating on 24/7 Carbon-Free Energy by 2030.”


Technology trends for 2021

For many people, 2020 will remain a memory they are not likely to quickly forget. The coronavirus pandemic has, in a short time, caused many companies to change their previous way of operating, adapting to the prevailing conditions. The issue of employee safety has become crucial, hence many companies have decided to turn to remote working mode. There is no denying that this situation has accelerated the digital transformation process in many industries, thus contributing to the faster development of modern technologies.

As they do every year, the major analyst firms publish rankings in which they present their new technology predictions for the coming year.

Internet of Behaviours

The concept of the Internet of Behaviours (IoB) emerged some time ago but, according to current forecasts, it is going to see significant growth in 2021 and beyond. It involves collecting data about users and linking it to specific types of behaviour. The aim is to improve the process of customer profiling and thus consciously influence their behaviour and decisions they make. IoB employs many different modern technologies – from AI to facial or speech recognition. When it comes to IoB, the safety of the collected data is definitely a moot point. On top of that there are ethical and social aspects of using this data to influence consumers.


Cybersecurity

Because of the COVID-19 pandemic, a lot of companies now operate in remote working mode. Therefore, the question of cyber security has now become more important than ever. Currently, this is a key element in ensuring the safe operation of the organisation. With the popularisation of remote working, cyber threats have also increased. It is, therefore, anticipated that companies will invest in strengthening their security systems to make sure that their data is protected and to prevent possible cyber-attacks.

Anywhere operations

The anywhere operations model is the biggest technology trend of 2021. It is about creating an IT environment that will give people the opportunity to work from just about anywhere by implementing business solutions based on a distributed infrastructure. This type of solution will allow employees to access the organisation’s resources regardless of where they are working and facilitate the exchange and flow of information between them. According to Gartner’s forecasts, as much as 40% of organisations will have implemented this operating model by 2023.

AI development

The list of the biggest modern technology trends of 2021 would not be complete without artificial intelligence, whose steady development we are constantly experiencing. AI solutions such as forecasting, speech recognition or diagnostics are used in many different industries. Machine learning models are also increasingly popular in factories, helping to increase the efficiency of their processes. Over the next few years, we will see the continued development of artificial intelligence, and the exploitation of the potential it holds.

Total Experience

Another trend that will most likely be big this year is Total Experience (TX), which is intended to bring together the differing perspectives of customers, employees and users to improve their experience where these elements become intertwined. This approach, combined with modern technology, is supposed to give some companies a competitive edge. As a result of the pandemic, most of the interactions among the aforementioned groups happen online. This is why it is so important for their respective experiences to bring them a certain kind of satisfaction, which will have an actual impact on the companies’ performance.

This year’s technology trends mainly focus on the development of solutions aimed at improving remote working and the experience of moving much of our lives to the online sphere. There is no denying that the pandemic has significantly accelerated the technological development of many companies. This rings particularly true for the micro-enterprises that have had to adapt to the prevailing conditions and have undergone a digital transformation. An important aspect among the projected trends is undeniably providing cyber security, both for organisations and individuals. BFirst.Tech seeks to adapt to the growing demand for these issues, which is why it offers a Cloud and Blockchain service that employs modern technology to create secure data environments.






Data Warehouse

A data warehouse is one of the more common topics in the IT industry. The collected data is an important source of valuable information for many companies, thus increasing their competitive advantage. More and more companies use Business Intelligence (BI) systems in their work, which quickly and easily support the analytical process. BI systems are based on data warehouses and we will talk about them in today’s article.

What is a data warehouse?



There are four main features that characterize a data warehouse. These are:

  • Subject orientation – the collected data is organized around main topics such as sales, product, or customer;
  • Integrity – the stored data is uniform, e.g. in terms of format, nomenclature, and coding structures. They are standardized before they reach the warehouse;
  • Timeliness – the data comes from different time frames, it contains both historical and current data;
  • Non-volatile – the data in the warehouse remains unchanged. The user cannot modify it, so we can be sure that we will get the same results every time.

Architecture and operation

In the architecture of a data warehouse, four basic components can be distinguished: data sources, ETL software, the data warehouse proper, and analytical applications. The following graphic shows a simplified diagram of that structure.

Data warehouse graph
Img 1 Diagram of data warehouse operation

As can be seen from the graphic above, the basis for building each data warehousing system is data. The sources of this data are dispersed – they include ERP, CRM, SCM, or Internet sources (e.g. statistical data).

The downloaded data is processed and integrated and then loaded into a proper data warehouse. This stage is called the ETL process, from the words: extract, transform and load. According to the individual stages of the process, data is first taken from available sources (extract). In the next step, the data is transformed, i.e. processed in an appropriate way (cleaning, filtering, validation, or deleting duplicate data). The last step is to load the data to the target database, i.e. the data warehouse.
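The extract-transform-load steps described above can be sketched in a few lines of Python. The sample records and the in-memory “warehouse” below are illustrative stand-ins for real ERP/CRM sources and a warehouse database.

```python
# A minimal illustration of the ETL process: extract, transform, load.
# Real pipelines read from ERP/CRM systems and load into a warehouse database.

def extract():
    # "Extract": pull raw records from dispersed sources (here: a hard-coded sample).
    return [
        {"customer": " Alice ", "amount": "120.50"},
        {"customer": "Bob", "amount": "80.00"},
        {"customer": "Bob", "amount": "80.00"},  # duplicate to be removed
    ]

def transform(rows):
    # "Transform": clean and standardise formats, then drop duplicate records.
    cleaned, seen = [], set()
    for row in rows:
        record = (row["customer"].strip(), float(row["amount"]))
        if record not in seen:
            seen.add(record)
            cleaned.append({"customer": record[0], "amount": record[1]})
    return cleaned

def load(rows, warehouse):
    # "Load": append the validated records to the target store
    # (here: a plain list standing in for the warehouse).
    warehouse.extend(rows)

warehouse = []
load(transform(extract()), warehouse)
print(warehouse)  # [{'customer': 'Alice', 'amount': 120.5}, {'customer': 'Bob', 'amount': 80.0}]
```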

As we mentioned earlier, the collected data is read-only. Users retrieve data from the warehouse with appropriate queries, and the results are presented in a friendlier form, i.e. reports, diagrams, or visualizations.

Main tasks

The main task of a data warehouse is on-line analytical processing (OLAP). It allows users to make various kinds of summaries, reports, or charts covering significant amounts of data – for example, a sales chart for the first quarter of the year, or a report on the products generating the highest revenue.
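The "highest-revenue products in the first quarter" report mentioned above can be expressed as a simple aggregation query. The sketch below uses an in-memory SQLite table with made-up sales rows; real warehouses use dedicated OLAP engines, but the idea of grouping and summarizing is the same.

```python
# An OLAP-style summary over an illustrative "sales" table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (product TEXT, quarter INTEGER, revenue REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("A", 1, 100.0), ("B", 1, 250.0), ("A", 1, 50.0), ("B", 2, 300.0)],
)

# Report: revenue per product in the first quarter, highest first.
report = conn.execute(
    "SELECT product, SUM(revenue) FROM sales "
    "WHERE quarter = 1 GROUP BY product ORDER BY SUM(revenue) DESC"
).fetchall()
print(report)  # [('B', 250.0), ('A', 150.0)]
```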

The next task of this tool is decision support in enterprises (DSS, Decision Support System). Given the huge amount of information held in data warehouses, they form part of a company’s decision support system. Advanced analyses conducted on these databases make it much easier to find dominant trends, models, or relations between various factors, which can facilitate managerial decision-making.

Another task of these databases is to centralize data in the company. Data from different departments and levels of the company is collected in one place, so everyone interested has access to it whenever they need it.

Centralization is connected with another role of a data warehouse, which is archiving. Because the data collected in the warehouse comes from different periods and the warehouse is supplied with new, current data on an ongoing basis, it also becomes an archive of data and information about the company.


Data warehousing is undoubtedly a useful and functional tool that brings many benefits to companies. Implementing such a database can facilitate and speed up some of the processes taking place in a company. An enormous amount of data and information is generated every day, and data warehouses are a perfect answer: they store this information in one safe place, accessible to every employee. If you want to introduce a data warehousing system to your company, check our product Data Engineering.



Safety of IoT devices

The Internet of Things (IoT) is entering our lives at an increasingly rapid pace. Controlling lighting or air conditioning from a smartphone is slowly becoming an everyday reality. Additionally, many companies are increasingly willing to introduce IoT solutions into their processes. According to the latest forecasts, 41 billion IoT devices will be connected to the internet by 2027. There is no doubt that IoT offers great opportunities. However, at the same time, there is no denying that it can also bring whole new threats. It is therefore worthwhile to be aware of the dangers that may be associated with the use of IoT.

Img 1 The total number of device installations for IoT


Hacking attacks

An extensive network of IoT devices creates many opportunities for hacking attacks, and the potential attack surface grows with the number of IoT devices in operation. It is enough for an attacker to hack into one of these devices to gain access to the entire network and the data that flows through it. This poses a real threat to both individuals and companies.

The loss of data

The loss of data is one of the most frequently mentioned threats posed by IoT. Improper storage of sensitive data such as names, addresses, PESEL numbers (Polish personal identity numbers), or payment card numbers can expose us to the danger of that data being used against us (e.g. taking out loans in our name, stealing money). Moreover, based on data collected by home IoT devices, an attacker can easily learn the habits of the household, which can facilitate sophisticated scams.

Botnet attacks

Another threat is the risk of an IoT device being included in a so-called botnet. A botnet is a network of infected devices that hackers can use to carry out various types of attacks. The most common botnet attack is a DDoS attack (Distributed Denial of Service), which floods a website with requests from multiple devices at the same time and can make it temporarily unavailable. Other examples of how a botnet is used include sending spam or mining cryptocurrency on the infected devices. All these attacks are carried out in a manner unnoticeable to the owner of the device. It is enough to click on a link from an unknown source that contains malware, and we unknowingly become part of a botnet.

Attacks on machines

From a company’s point of view, attacks on industrial robots and machines connected to the network can be a significant threat. Taking over control of such devices can cause serious damage to companies. For example, hackers can change the production parameters of a component in such a way that the change is not caught right away but renders the component useless. Attackers can also disturb the operation of machines or interrupt the energy supply. These activities are a serious threat to companies, which could suffer huge financial losses as a result.

How can we protect ourselves?

It may seem that it is impossible to eliminate the dangers of using IoT technology. However, there are solutions that we can implement to increase the safety of our devices. Here are some of them:

Strong password

An important aspect of IoT device security is password strength. Very often users choose simple passwords containing data that is easy to identify (e.g. names or dates of birth). It often happens that the same password is used for several devices, making it easier to access all of them. Sometimes users also do not change the default password set by the manufacturer of the device. It is therefore important that the password is not obvious. Increasingly often, manufacturers force users to choose strong passwords by setting conditions they must meet: upper- and lower-case letters, numbers, and special characters. This is a very good practice that can increase security on the network.
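The conditions listed above can be checked with a few lines of code. This is a hedged sketch of one possible policy; the minimum length and the exact character classes are assumptions, and real manufacturers may enforce different rules.

```python
# A simple password-policy check: minimum length, upper- and lower-case
# letters, digits, and at least one special character.
import string

def is_strong(password, min_length=12):
    return (
        len(password) >= min_length
        and any(c.islower() for c in password)
        and any(c.isupper() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(is_strong("john1980"))      # False: too short, no upper case or symbol
print(is_strong("V7#kq!Lr2$wZ"))  # True
```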

Software update

Another way is to regularly update the software used by IoT devices. If manufacturers detect a vulnerability in their security, they can protect users from a potential attack by providing a new software version that eliminates the detected deficiencies. Ideally, the device should be set to update its system automatically; then we can be sure that it always runs the latest software version.

Secure home network

Securing your home network is as important as setting a strong access password. In this case, it is also recommended to change the original password set by the router provider. Additionally, the home Wi-Fi network should use an encrypted connection such as WPA2-PSK.

Purchasing restraint

Before buying a given device, it is good to consider whether we actually need it, rather than treating it as just another cool gadget. Let’s remember that every additional IoT device in our environment increases the risk of a potential attack.

All the above-mentioned actions should be taken by users of IoT devices. However, part of the protection is also on the manufacturer’s side, for example the encryption of network messages, which prevents the interception of data in transport. The most commonly used protection is the TLS protocol (Transport Layer Security), which helps secure the data transmitted over the network. In addition, the manufacturer should regularly check the device’s security features, so that any gaps can be caught and eliminated. It is also good to secure devices from the beginning against automatic connection to open public networks.
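On the device side, enforcing TLS mostly means refusing weak protocol versions and always verifying the server’s certificate. A minimal sketch using Python’s standard `ssl` module, assuming the device firmware runs Python, which is an assumption for illustration only:

```python
# Configure a TLS context a device-side client could use before sending
# telemetry: modern protocol versions only, certificate checks enabled.
import ssl

context = ssl.create_default_context()  # loads the system CA certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2  # reject older protocols

# Host-name and certificate verification are on by default:
print(context.check_hostname)                    # True
print(context.verify_mode == ssl.CERT_REQUIRED)  # True
```

A socket wrapped with this context would refuse to talk to a server presenting an invalid certificate, which is exactly the property that defeats interception in transport.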

In June 2019 the Cybersecurity Act was established, which aims at strengthening the cyber security of EU Member States. It regulates the basic requirements to be met by products connecting to the network, which contributes to the safety of these devices. Rapid IoT development will likely bring more similar regulations, which will significantly contribute to maintaining global cyber security.


The advent of IoT technology has brought a huge revolution, both for individuals and for companies as a whole. Although IoT brings many benefits and conveniences, you must also be aware that it may pose a threat to the security of our data or of ourselves. However, it is worth remembering that following the few principles above can make a significant contribution to the safety of your IoT equipment.







Internet of Things

IoT is a broad term, often defined in different ways. To get a good understanding of what the Internet of Things actually is, it’s best to break the term down into its parts.

What is referred to as a “Thing” in the Internet of Things are objects, animals, and even people equipped with smart devices (sensors) that collect certain information. So a thing could be either a fridge fitted with a smart module or an animal wearing a smart band that monitors its vital functions. Devices communicate to send and receive data. In order for them to communicate, they need a network connection, and this is the “Internet” in IoT. This connection can be made with a variety of data transmission technologies: Wi-Fi, 5G networks, Bluetooth, as well as more specialised protocols such as Zigbee, which, thanks to its low power consumption, is great for IoT devices where battery life is of key importance, or Z-Wave, often used in smart building systems.

It’s a good idea to mention here that not every IoT device needs to have direct access to the Internet. The data collected by IoT devices is then uploaded and analysed. In order to efficiently collect and analyse large data sets, as well as to ensure high system scalability, cloud technologies are often used. In this case, Internet of Things devices can send data to the cloud via an API gateway. This data is then processed by various software and analytical systems. Big Data, artificial intelligence, and machine learning technologies are used to process the data.
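The data a sensor pushes through such a gateway is typically a small structured message. The sketch below only builds and serializes a reading; the field names and the gateway URL are illustrative assumptions, and nothing is actually transmitted.

```python
# Build the JSON payload a sensor might send through an API gateway.
import json

GATEWAY_URL = "https://example-gateway.invalid/ingest"  # hypothetical endpoint

def build_reading(device_id, temperature_c, timestamp):
    return json.dumps({
        "device_id": device_id,
        "temperature_c": temperature_c,
        "timestamp": timestamp,
    })

payload = build_reading("fridge-01", 4.2, "2021-06-01T12:00:00Z")
print(payload)
```

In a real deployment the payload would be POSTed to the gateway over TLS, and the cloud side would route it to storage and analytics.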

IoT applications

IoT has many different applications, from household items and lighting to biometric devices, to name a few.

Figure 1 Internet of Things

The figure above shows 101 terms related to the Internet of Things, divided into categories. It’s plain to see that there are many technologies associated with IoT, ranging from connectivity issues, data processing and analysis to security and IoT network architecture. We will not describe the above-mentioned technologies in this article, but we should bear in mind what an immensely extensive field IoT is and how many other technologies are involved.

The Internet of Things is developing at a very fast pace, recording high annual growth rates. According to various estimates, the IoT market will grow at a rate of 30 per cent in the next few years, and in Poland this rate could reach up to 40 per cent. By 2018, there were around 22 billion connected Internet of Things devices, and it is estimated that this number could be up to as many as 38.6 billion devices by 2025.

The Internet of Things in the future

The Internet of Things is finding its way into more and more areas of our lives. Household goods and lighting items are things we use pretty much every day. If we add some “Intelligence” to ordinary objects, it becomes easier to manage the entire ecosystem of our home or flat. As a result, we will be able to optimise the costs of equipment wear and tear and their working time. The collection of huge amounts of data, which will then be processed and analysed, is expected to bring about even better solutions in the future. In recent years, it’s often been mentioned that “Data is the gold of the 21st century.” and IoT is also used to collect this data. With IoT progressing like that, it won’t be long before smart devices are with us in the vast majority of our daily activities.

Controversy around the Internet of Things

The development of the Internet of Things will bring many changes to everyday life. The biggest problem with this is security. The sheer amount of data collected by devices, which very often have little or no security, exposes users to breaches or to losing control over that data. Another issue is the dispute over who should have access to the data. Questions of morality are raised here, such as whether large corporations should be able to eavesdrop on the user on a daily basis. The companies explain their modus operandi by the fact that the collected data is a tool for developing the services they offer.

Opponents, on the other hand, see it differently, considering it an intrusion into user privacy and pointing to the uncertainty of where the collected data may end up. However, a new avenue is emerging, namely the use of blockchain technology to securely store data in the IoT network. With a decentralised blockchain network, there is no central entity with control over user data. The technology also makes the data tamper-evident, giving certainty that it has not been modified by anyone.
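The tamper-evidence idea can be shown with a minimal hash chain, the core mechanism behind blockchain storage: each block commits to the previous one, so altering any record changes every later hash. The sensor readings are made up for illustration, and real blockchain networks add distributed consensus on top of this.

```python
# A minimal hash chain: each entry's hash depends on the previous hash,
# so changing one record invalidates all hashes after it.
import hashlib

def chain(records):
    prev, hashes = "", []
    for record in records:
        prev = hashlib.sha256((prev + record).encode()).hexdigest()
        hashes.append(prev)
    return hashes

original = chain(["temp=21.5", "door=open", "temp=22.0"])
tampered = chain(["temp=19.0", "door=open", "temp=22.0"])

# The first record differs, so every hash from that point on differs too.
print(original[0] != tampered[0])  # True
print(original[2] != tampered[2])  # True
```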

Who will benefit from the Internet of Things?

IoT is targeting different industries. Solutions are being developed for both the consumer market and the business market. The companies involved in this area will have a substantial platform to develop their solutions. The upcoming revolution will also change many areas of our lives. The ordinary user will get something out of it too, as he or she will have access to many solutions that make life easier. The Internet of Things presents tremendous opportunities, but there is no denying that it can also bring entirely new risks. So, in theory, the IoT will benefit everyone. You can read more about the security of IoT devices in our article.

BFirst.Tech and IoT

As a company specialising in the new technology sector, we are not exactly sleeping on the subject of IoT either. Working with Vemmio, we are developing the design of a voice assistant to manage a house or flat in a Smart Home formula. Our solution will implement a voice assistant on the central control device of the Smart Home system. Find out more about our projects here.

With biometric authentication, the first thing that gets checked is the voice that issued the command to activate the device. If the voice authentication is positive, the device is ready to operate and accepts commands through which home appliances can be managed. That’s exactly the idea behind the Smart Home. This solution makes it possible to manage a flat, smaller segments of it, or even an entire building.
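The authenticate-then-command flow described above can be sketched as a simple gate. This is purely illustrative: the enrolled "voiceprint" is a stand-in string, real biometric matching is far more involved, and nothing here reflects the actual Vemmio implementation.

```python
# Illustrative flow: verify the speaker first, then accept device commands.
ENROLLED_VOICEPRINT = "owner-voiceprint"  # hypothetical enrolled identity

def handle_command(voiceprint, command, home_state):
    if voiceprint != ENROLLED_VOICEPRINT:
        return "rejected: unknown voice"
    home_state[command] = "on"  # e.g. start the coffee machine
    return f"accepted: {command}"

home = {}
print(handle_command("owner-voiceprint", "coffee_machine", home))  # prints "accepted: coffee_machine"
print(handle_command("stranger", "lights", home))                  # prints "rejected: unknown voice"
```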

Individual household appliances, lighting, or other things are configured with a device that helps us manage the household. This is the technical side: the equipment has to be compatible with the management device. This puts the control centre in one place, and today managing the entire system from a smartphone is already standard. With the voice assistant feature, the entire system can be controlled without having to physically use the app. Brewing coffee in the coffee machine, adjusting the lighting, or selecting an energy-saving programme will all be possible with voice commands.