Smart City

With increasing urbanisation around the world and increasingly pressing social issues such as air pollution, urban litter, climate change and over-reliance on car transport, the need to manage cities more efficiently is becoming apparent. Modern technologies can help achieve this. The idea of Smart Cities is to use communication technologies to create a more interactive and efficient urban infrastructure, and to raise citizens’ awareness of how it operates [1]. Smart Cities therefore encompass a wide range of solutions that, in combination, improve residents’ lives and help tackle the problems of today’s world. This article presents a selection of Smart City solutions: the role of data collection in the Smart City, Smart City technologies for transport, smart energy management, and tools for combating environmental and noise pollution.

Data collection and analysis in a Smart City  

The collection of data through all manner of measurement tools, such as sensors, probes and cameras, plays a fundamental role in the functioning of a Smart City. Gathering accurate, up-to-date data on the city’s operation is crucial to the proper functioning of Smart City solutions, as its analysis allows real-time decision-making that significantly reduces resource consumption without compromising residents’ standard of living [2]. Properly collecting and analysing the vast amount of data needed for Smart City systems to operate correctly is a huge challenge.

BFirst.Tech specialises in the implementation of IoT technology, providing advanced solutions for smart monitoring, data analysis and optimisation of urban infrastructure. As a member of the United Nations Global Compact Network Poland and co-author of the Recommendations for Cities by the World Urban Forum 11 Business Council, the company actively supports the development of sustainable technologies, focusing on innovative diagnostics, environmental acoustics and data engineering systems. 

Smart City in transport   

One of the main areas of use of Smart City solutions is transport. Today’s cities are able to collect far more transport data using smart tools in public transport vehicles, at important points on the road network such as intersections, or through public monitoring systems.

The data collected in this way can then be processed and used to improve the efficiency of the city’s transport system. For example, it can be used to display timetables and the current position of public transport vehicles together with the estimated time of arrival at the stop, making public transport a much more attractive alternative to the car.

Fig. 1. Using Smart City solutions in transport. Source: https://www.digi.com/blog/post/introduction-to-smart-transportation-benefits

Data flowing into traffic management systems allows real-time optimisation of urban traffic to improve safety and reduce emissions. Smart parking systems monitor parking spaces and inform drivers of their availability, and also allow parking payments to be collected, improving driver comfort and reducing pollution by cutting the time spent looking for a space [3].

Smart City solutions also help to solve the so-called first and last kilometre problem: the first and last parts of a journey in a city are usually considerably shorter than the public transport leg itself, yet can take a similar amount of time. Smart City systems can link the public transport network with lightweight short-distance transport modes such as bicycles and electric scooters. Properly placed hubs for such transport, combined with ease of use, can significantly facilitate urban travel and even encourage some drivers to switch to public transport [4].

Smart energy management  

With the increasing demand for electricity, due in part to the need to decarbonise the economy as much as possible, there is a growing emphasis not only on increasing the production of energy from renewable sources, but also on using it more efficiently. The use of intelligent energy management solutions leads to less energy consumption and therefore less energy production, which can have a major impact on environmental protection. 

Fig. 2. Green energy in the city. Source: https://leadersinternational.org/sme4smartcities-insights/revolutionising-urban-life-how-smart-technologies-and-sustainable-energy-are-creating-the-cities-of-the-future/

Among the Smart City systems that support better energy management are smart grids, which monitor energy distribution and consumption; efficient systems for storing cheaply produced energy at peak production times; and smart sensors able to regulate lighting according to the amount of natural light. In combination, these solutions also make it possible to create programmes that optimise when energy is consumed, drawing on it mainly during the periods of lowest production cost; this is applied, among other things, to the charging of electric vehicles [5].
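
As a toy illustration of such optimisation, the sketch below picks the cheapest hours of the day for charging an electric vehicle. The hourly prices and the required six hours of charging are invented for the example and do not come from any real tariff.

```python
# Illustrative sketch: choose the cheapest hours of the day to charge an EV.
# Prices are hypothetical PLN/kWh values for each hour (0-23).
hourly_prices = [0.45, 0.40, 0.38, 0.35, 0.34, 0.36, 0.50, 0.65,
                 0.70, 0.68, 0.62, 0.60, 0.58, 0.57, 0.60, 0.66,
                 0.72, 0.80, 0.78, 0.70, 0.60, 0.55, 0.50, 0.47]

def cheapest_charging_hours(prices, hours_needed):
    """Return the hours with the lowest prices, in chronological order."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    return sorted(ranked[:hours_needed])

schedule = cheapest_charging_hours(hourly_prices, hours_needed=6)
cost_index = sum(hourly_prices[h] for h in schedule)
print(f"Charge during hours {schedule}, total price index {cost_index:.2f}")
```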

In addition to the above ways of using electricity more efficiently, energy consumption can also be reduced through technical developments and new regulations for the construction and renovation of buildings so that they use as little energy as possible. This can be achieved, among other things, by using efficient and environmentally friendly materials, by designing buildings to minimise heat loss while admitting as much natural light as possible, or by using intelligent systems to optimise heating and lighting.

Efficient energy management is one of the key aspects of the energy transition and the fight against progressive climate change. The transformation of cities into smart cities will require large amounts of electricity, which must be produced efficiently to contribute to better environmental protection [6].  

BFirst.Tech has become a member of the Business Council at PRECOP29, which produced a “White Paper” providing a Polish perspective on climate issues, including energy management, ahead of the United Nations Climate Change Conference 2024. BFirst.Tech offers end-to-end solutions for monitoring, diagnostics and management of big data, including energy data. To learn more, explore our solutions under this link.

Smart City in the fight against pollution and noise  

One of the biggest problems facing modern cities is air pollution, which results from a number of factors, such as the burning of solid fuels in household stoves and poor urban planning. High levels of pollution affect the health of city dwellers, reducing their productivity, straining the resources of health services and making cities less attractive to business and tourists.

In order to effectively combat air pollution, it is necessary to have accurate information on its levels and spatial distribution provided by a large number of sensors across the city. The information gathered in this way helps to make appropriate decisions on measures to improve the state of the air. In addition, properly presented information on the state of the air to residents can strengthen public awareness of the problem and increase pressure to find appropriate solutions to combat pollution [7]. 
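
To illustrate the decision-support side, the sketch below averages hypothetical PM2.5 readings per district and flags exceedances. The readings and district names are invented, and the 15 µg/m³ threshold follows the WHO 2021 24-hour guideline for PM2.5; a real deployment would use its own regulatory limits.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical (district, PM2.5 in µg/m³) readings from city sensors.
readings = [("Centre", 28.1), ("Centre", 31.4), ("North", 12.3),
            ("North", 14.8), ("South", 19.5), ("South", 22.0)]

ALERT_THRESHOLD = 15.0  # µg/m³, WHO 2021 24-hour guideline for PM2.5

by_district = defaultdict(list)
for district, value in readings:
    by_district[district].append(value)

for district, values in by_district.items():
    avg = mean(values)
    status = "ALERT" if avg > ALERT_THRESHOLD else "ok"
    print(f"{district}: average {avg:.1f} µg/m³ -> {status}")
```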

In addition to air pollution, the problem of urban noise is increasingly discussed. Traffic jams, renovations, the construction of new buildings and other sources of noise in cities can pose a serious threat to human health [8], worsening concentration and focus and lowering residents’ standard of living.

Fig. 3. Sources of noise for urban residents. Source: https://www.hseblog.com/noise-pollution/

Smart sensors that are able to estimate not only the level of noise recorded but also the source of the noise can be used to combat this problem. This data can then be processed and used by experts to prepare a plan to mitigate noise levels, thus improving the lives of residents [9]. 
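
As a simple illustration of the measurement side, the sketch below computes the RMS level of an audio buffer in decibels. It reports dBFS relative to digital full scale; a real sensor would be calibrated against the 20 µPa reference pressure to give dB SPL, and identifying the noise source would additionally require a trained classifier, which is not shown here.

```python
import math

def rms_level_dbfs(samples):
    """RMS level of a signal in dB relative to full scale (amplitude 1.0).

    A deployed noise sensor would calibrate this value against the
    20 µPa reference pressure to report dB SPL; here we stay in dBFS.
    """
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-12))  # guard against log(0)

# A made-up, quiet 440 Hz sinusoid sampled at 8 kHz for one second:
signal = [0.1 * math.sin(2 * math.pi * 440 * t / 8000) for t in range(8000)]
print(f"{rms_level_dbfs(signal):.1f} dBFS")
```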

BFirst.Tech is a company with many years of expert experience in implementing solutions to combat noise pollution. It offers a modern, advanced approach to noise reduction, in line with the needs not only of smart cities but also of modern industry. Explore our products and solutions under this link.

Summary

Smart Cities make use of today’s advanced data acquisition, processing and storage techniques. Through their use, our cities are gaining new tools and techniques to combat the increasingly pressing problems of the modern world. These technologies can help not only with the problems of public transport, air pollution, noise and energy management mentioned in this article, but also with many others, such as crisis prevention and management, public safety and waste management. How well cities make use of them could be a key factor in their further development and the key to better meeting the needs of their inhabitants.

References

[1] https://uclg-digitalcities.org/en/smart-cities-study/2012-edition/ 

[2] https://www.oecd.org/en/publications/smart-city-data-governance_e57ce301-en.html 

[3] https://www.teraz-srodowisko.pl/aktualnosci/przyszlosc-transport-smart-city-forum-11962.html 

[4] https://smartride.pl/przyszlosc-transportu-w-smart-city-komfort-podrozy-i-czyste-powietrze/ 

[5] https://energy-floors.com/10-smart-city-energy-solutions-kinetic-floors/ 

[6] https://www.teraz-srodowisko.pl/aktualnosci/inteligentne-technologie-zarzadzanie-energia-miasta-efektywnosc-energetyczna-13055.html 

[7] https://www.innovationnewsnetwork.com/the-development-of-the-smart-city-waste-management-and-air-quality-monitoring/39990/ 

[8] https://pmc.ncbi.nlm.nih.gov/articles/PMC6878772/ 

[9] https://newsroom.axis.com/blog/noise-pollution-smart-cities 

The effect of technological illusions on people’s perception of reality

Computerisation, which began in the 1990s, has propelled humanity into an era where working and interacting with technology on a daily basis is common and natural. Artificial Intelligence answers our questions, and the Internet is seen as an endless source of information. While one might think that the development of technology helps us understand the world around us, there are phenomena that show how often our intuitions fail. Technologies that at first glance seem simple and obvious can hide paradoxes and illusions that are more difficult to spot, and to understand, than it seems. This article explores three interesting phenomena: the ELIZA effect, Moravec’s paradox and the Streisand effect. Each of these shows how technology can change our perception of reality, affecting how we see machines, data and information. Exploring these phenomena provides a different perspective on the development of technology and helps us use it more consciously.

ELIZA effect 

In the 1960s, Joseph Weizenbaum at the Massachusetts Institute of Technology developed the ELIZA programme [1]. This programme was one of the first chatbots: it convincingly mimicked a normal conversation. Despite the simplicity of the algorithm, which generated responses by matching keywords in the user’s input against predetermined patterns, many users reported the impression that ELIZA really understood them. Thanks to the clever selection of answers, users became highly engaged in the conversation, satisfied that their interlocutor understood them and was paying attention. The creator himself was surprised at how convinced people were that ELIZA was a human being, not a machine.
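
A minimal sketch of this keyword-and-pattern mechanism might look as follows. This is not Weizenbaum’s original implementation (which was written in MAD-SLIP), just an illustration of the idea in Python; the rules and reflection table are invented.

```python
import re

# A tiny ELIZA-style responder: match a keyword pattern, then reflect
# part of the user's input back inside a canned response template.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence):
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # default when no keyword matches

print(respond("I feel misunderstood by everyone"))
# -> Why do you feel misunderstood by everyone?
```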

It is from this chatbot that the ELIZA effect got its name: the tendency of humans to attribute understanding, empathy and intelligence to machines and programmes (including AI), i.e. to anthropomorphise them [2]. Examples of this phenomenon include the “hello” or “thank you” messages on ATM and self-checkout displays, which are pre-defined texts rather than an expression of gratitude by the machine, or our communication with voice assistants: thanking them, or saying “she” about the Alexa assistant, which despite using a female voice remains a genderless algorithm. The reason behind this effect can be attributed to our nature: everything that is human seems familiar, closer, less frightening. This can be seen, for example, in the way ancient deities were depicted as humans and animals, with weather phenomena or elements attributed to them [3].

Bonding in this way with sometimes very complex technologies helps to overcome fear of novelty, encourages interaction and builds attachment to the product being used. At the same time, this effect can cause an overestimation of the capabilities of a given algorithm (due to the assumption that the machine knows and understands more than it actually does), excessive trust in the information received, or inappropriate treatment of the creation as a human being, e.g. treating a chatbot like a therapist or marrying an AI [4].

Moravec’s paradox 

Another interesting phenomenon takes its name from the Canadian scientist Hans Moravec, author of works on technology, futurology and transhumanism. In 1988, he formulated the statement, a paradox also articulated by Rodney Brooks and Marvin Minsky: “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility” [5]. It implies that tasks considered difficult, requiring knowledge, intelligence and logical thinking, are relatively easy to solve using AI, while activities we consider simple and natural, such as walking, recognising faces and objects or motor coordination, are very challenging and difficult to implement in machines.

Researchers attribute this paradox to the human evolutionary process. Human motor skills developed over millions of years, were essential for survival, and were slowly but continuously honed by natural selection. The human brain has had plenty of time to adapt to activities such as grasping tools, recognising faces and emotions, walking or motor coordination, so they are automated at a deep level; we perform them without conscious effort. Abstract thinking, mathematics and logic, on the other hand, are relatively new abilities, not rooted so deeply and requiring conscious intellectual effort. Because these abilities are less deeply ingrained, it is easier to reverse-engineer them and implement them in the form of a programme. In addition, computers are most effective at mapping logical, schematic processes with specific steps. For these reasons, we already have programs that are superior to humans at complex calculations, chess or simulations, but when it comes to mobility, coordination, object and face recognition, or other “basic” activities that we consider natural and simple even for a small child, development is very slow. Only recently have the available data and technology allowed gradual progress in this area, as shown, for example, by the robotics company Boston Dynamics [6].

Streisand effect 

Another phenomenon presented in this paper is the Streisand effect: the more one tries to remove or censor a piece of information on the Internet, the more publicity and interest it receives. The effect owes its name to Barbra Streisand and a situation that occurred in 2003, when photographer Kenneth Adelman took photographs of the California coastline to document its progressive erosion [7]. These photographs were made public on a website dedicated to coastal erosion. Coincidentally, one of them showed Barbra Streisand’s residence. She sued the photographer for invasion of privacy, demanding damages and the removal of the photograph, as she did not want anyone to see it. The outcome was quite the opposite: she lost the lawsuit and had to cover the photographer’s legal costs, and not only was the photograph not removed, but it received far more publicity and many more views than before the whole affair.

This effect can be attributed to several factors, mainly based on human psychology, the role of social media and the general mechanisms of online information circulation. People are very reluctant to endure any restrictions imposed on their freedom, including access to information. Often, in situations of enforced censorship, people deliberately act out of spite – they want to get as much news about the “forbidden” information as possible, are willing to share it and spread it further. The “forbidden fruit” effect works in a similar way – by attempting to hide the information, it appears even more interesting and intriguing, even though without the attempt the message would probably have been disregarded. Nowadays, because of the ease of access to information and the multitude of different media, news is widespread and can quickly become viral, attracting huge audiences. The Internet has also changed the perception of various content. In theory, the fact that any user can save and share content makes it impossible to remove something from the Internet once it has been posted there. Given also how quickly the media seize on and publicise instances of censorship, it becomes quite obvious why an attempt to hide or cover up something usually ends up having the opposite effect. 

There are many examples of the Streisand effect. In 2013, after her Super Bowl performance, singer Beyoncé’s publicist deemed one of the photos particularly unfavourable and attempted to have it removed from the Internet. The effect was exactly the opposite: the photograph became considerably more popular than it had originally been and began to serve as a template for internet memes. There are also many examples from the world of technology. In 2007, a user of the Digg website revealed that the Advanced Access Content System (AACS) copy protection used in HD DVD players could be cracked with a string known as 09 F9. Representatives of the industry using this protection demanded that the Digg post be removed and threatened legal consequences. As a result, a great deal of discussion took place on the Internet, and information about the code (for a while referred to as “the most famous number on the Internet”) spread widely and was reproduced in the form of videos, t-shirt prints and even songs [8].

Summary

The phenomena discussed in the article show that although technologies such as Artificial Intelligence and the Internet are powerful tools, they have the potential to distort human perception and create misleading impressions. It is easy to fall into the various traps related to technology, which is why awareness of the phenomena mentioned is important, as it allows for a more critical approach towards interaction with technology and information, a better use of their potential and their healthy and sensible application.  

References

[1] https://web.stanford.edu/class/cs124/p36-weizenabaum.pdf 

[2] https://modelthinkers.com/mental-model/eliza-effect 

[3] https://builtin.com/artificial-intelligence/eliza-effect 

[4] https://www.humanprotocol.org/blog/what-is-the-eliza-effect-or-the-art-of-falling-in-love-with-an-ai  

[5] https://www.researchgate.net/publication/286355147_Moravec%27s_Paradox_Consideration_in_the_Context_of_Two_Brain_Hemisphere_Functions  

[6] https://www.scienceabc.com/innovation/what-is-moravecs-paradox-definition.html  

[7] https://www.forbes.com/2007/05/10/streisand-digg-web-tech-cx_ag_0511streisand.html  

[8] https://web.archive.org/web/20081211105021/http://www.nytimes.com/2007/05/03/technology/03code.html 

Proteus Effect – How an Avatar Influences the User

The relationship between man and technology has been a subject of philosophical interest for some time. Over the years, a number of theories have emerged that attempt to explain the reciprocal influence of man on technology and technology on man, or entire societies. Although debates between determinists (who claim that technology shapes humans) and constructivists (who argue that humans shape technology) will likely never be resolved, this article examines the Proteus effect, which may be closer to one of these perspectives.

What is the Proteus effect?

The Proteus effect is a phenomenon first described by Yee and Bailenson in 2007. It is named after the myth of the god Proteus, who could change his appearance in any way he wished. He was said to use this power to conceal his knowledge of past and future events. Yee and Bailenson noted that individuals using virtual avatars change their behaviour based on the observed traits of these characters while playing them in the virtual world. The researchers argue that players infer from the appearance and characteristics of their avatars how they should adjust their behaviour and overall attitude to meet the expectations set by their virtual representation. There are also grounds to believe that this effect can extend beyond digital worlds and influence behaviour and attitudes in the real world [1].

Proteus Effect – Example of Occurrence

To illustrate how the Proteus effect works with a real-world example, I will refer to a study in which the authors investigated the presence of the Proteus effect during matches played with various characters in the popular MOBA game, League of Legends. Participants in the game are divided into two teams of five players each, who then engage in battle on a map. Before starting, each player must choose a so-called champion. League of Legends allows players to play a match with one of over 140 champions [2], each characterised by different appearances and abilities. The authors of this study analysed how players communicate with each other, considering the champion they play.

The presence of the Proteus effect was measured using the game’s chat. Researchers established indicators such as vocality (“acting more vocal”), toxic behaviour (“acting more toxic”), and positive or negative valence. Valence is a form of sentiment analysis aimed at depicting the emotional state of a player. The analysis results confirmed the presence of the Proteus effect, but not for every champion or type of champion. It was primarily observed through valence and toxicity of speech. The most significant finding of this study was proving that the way players communicated via chat indeed changed with the champion they selected. Depending on the chosen character, a player did not necessarily speak more or less but could exhibit more toxic behaviour and be in a worse mood [3].
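
As a loose illustration (the study’s actual operationalisation is more sophisticated), toy versions of such chat indicators could be computed as below. The word lists and the chat log are invented for the example.

```python
# Toy chat indicators loosely inspired by the study: vocality
# (message count), toxicity (share of toxic words) and valence
# (positive minus negative word counts). Word lists are invented.
TOXIC = {"noob", "trash", "useless"}
POSITIVE = {"nice", "gg", "thanks"}
NEGATIVE = {"bad", "awful", "report"}

def chat_indicators(messages):
    words = [w for m in messages for w in m.lower().split()]
    toxicity = sum(w in TOXIC for w in words) / max(len(words), 1)
    valence = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return {"vocality": len(messages), "toxicity": toxicity, "valence": valence}

log = ["gg nice gank", "mid is trash", "report mid pls"]
print(chat_indicators(log))
```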

Utilising the effect

The Proteus effect is a phenomenon that particularly draws our attention to the relationship between people and virtual worlds. It clearly demonstrates that technology, in one way or another, exerts a direct influence on us, even altering our behaviour. Some researchers have attempted to explore whether this effect can be practically applied, for example, in performing certain jobs. Let’s delve into their studies.

Impact on strength

A group of five German researchers hypothesised that using a suitably matched avatar would cause the person controlling it to perform tasks better than if they embodied a different, non-distinctive character or themselves. In this case, the researchers decided to investigate whether a person whose virtual appearance suggests they are stronger than the subject would lead the subject to exert more effort in physical exercises. In addition to tracking the movements of participants wearing VR equipment, grip strength was also measured.

During the study, participants were assigned avatars according to their gender. They were subjected to a series of physical tasks, such as lifting weights of varying heaviness and squeezing a hand as hard as possible for five seconds. Based on the results, the authors conclude that the study cannot be considered representative: no increase in grip strength was observed in women, though such an effect was evident in men. It can thus be partially inferred that a more muscular avatar may influence the strength of men [4].

Stimulating creativity 

The following study examined whether an avatar, as a representation of an individual in the virtual world, stimulates creativity. As part of the study, creativity sessions were organised during which participants brainstormed while embodying a particular character. Prior to the sessions, the researchers selected several avatars perceived as creative and as neutral. Participants were divided into three groups: a control group (brainstorming in the real world), a group using neutral avatars, and a group using creative avatars, defined as inventors. All groups held their creative sessions in the same rooms: the control group gathered around a round table, while the others used equipment in the same room in separate cubicles and then sat at a round table in a space recreated in virtual reality.

Figure 2. On the left, the virtual space with a round table and workstations recreated in virtual reality. On the right, its real-world counterpart. [5]

The researchers avoided any contact between the participants in the avatar groups before and after the main part of the brainstorming session took place; the subjects never met each other outside the experiment. A key finding, particularly relevant for the future of remote collaboration, is that the groups using non-creative avatars achieved the same results as those sitting at the table in the real world. However, the most important result is the demonstration that individuals embodying an inventor avatar consistently achieved better results for each creativity indicator used in the experiment [5].

Assistance in improving communication

Another study explored the potential for training effective communication skills among physicians at the preoperative stage. Communication with patients can be ineffective, partly because doctors may use jargon or phrases from their professional environment. The study utilised two virtual reality experiences in which participants played the role of a patient, enabling the researchers to describe the development and impressions that the subjects experienced.

During the experiment, participants experienced negative or positive communication styles in a situation where they were about to undergo surgery. Interviews conducted at the next research stage revealed that participants recognised the importance of good communication skills. Overall, the participants learned and adjusted their communication style in their subsequent work. Virtual reality, in which participants embodied a patient in one of the two experiences, proved effective in providing a fully immersive experience. As participants stated, they felt as if they were the patient. It can be further concluded from this study that the Proteus effect is also useful for educational purposes, improving communication, and increasing empathy towards others [6].

Summary

In the face of continuous technological development, we constantly discover new phenomena that can shape our future approach to technology. The Proteus effect demonstrates that technology’s impact can be much more direct than we may assume. Although this phenomenon is largely harmless, it shows how we can be influenced by our virtual representation. People have already begun exploring applications of this effect in various areas, such as enhancing perceived physical strength, supporting creative processes, and improving communication skills. However, to ascertain whether the Proteus effect will become a permanent aspect of our daily lives, we will need to wait and see. It is also worth noting that Microsoft has begun organising international conferences in virtual reality, using avatars for participation, and that Polish entrepreneur Gryń, former owner of Codewise, has established a company in London to scan people for such purposes. At BFirst.Tech, leveraging its expertise in Data Architecture & Management, specifically through its Artificial Intelligence Adaptations product, a project has been completed for the Rehasport clinic network, enabling surgeries to be conducted in augmented reality (AR).

References

[1] The Proteus Effect: The Effect of Transformed Self‐Representation on Behavior: https://academic.oup.com/hcr/article-abstract/33/3/271/4210718?redirectedFrom=fulltext&login=false 

[2] Number based on description at: https://www.leagueoflegends.com/en-us/champions/ (accessed 23 June 2024) 

[3] Do players communicate differently depending on the champion played? Exploring the Proteus effect in League of Legends: https://www.sciencedirect.com/science/article/abs/pii/S0040162522000889

[4] Flexing Muscles in Virtual Reality: Effects of Avatars’ Muscular Appearance on Physical Performance: https://www.academia.edu/77237473/Flexing_Muscles_in_Virtual_Reality_Effects_of_Avatars_Muscular_Appearance_on_Physical_Performance 

[5] Avatar-mediated creativity: When embodying inventors makes engineers more creative: https://www.sciencedirect.com/science/article/pii/S0747563216301856 

[6] Patient-embodied virtual reality as a learning tool for therapeutic communication skills among anaesthesiologists: A phenomenological study: https://www.sciencedirect.com/science/article/pii/S0738399123001696 

Application of Machine Learning in Data Lakes

In the digital age, there is a growing need for advanced technologies, not only for collecting but above all for analysing data. Companies are accumulating increasing amounts of information that can improve their efficiency and innovation. The Data Engineering offering from BFirst.Tech, an area of sustainable products for effective information management and processing, can play a key role in putting data to work for a company. This article presents one of the opportunities offered by the Data Engineering area: the integration of Machine Learning with Data Lakes.

Data Engineering – an area of sustainable products dedicated to collecting, analysing and aggregating data

Data engineering is the process of designing and implementing systems for the effective collection, storage and processing of large data sets. This supports the accumulation of information such as website traffic analysis, data from IoT sensors or consumer purchasing trends. The task of data engineering is to ensure that information is not only skilfully collected and stored, but also easily accessible and ready for analysis. Data can be effectively stored in Data Lakes, Data Stores and Data Warehouses. Such integrated data sources can be used to create analyses or feed artificial intelligence engines, ensuring comprehensive use of the collected information (see the detailed description of the Data Engineering area (img 1)).

img 1 – Data Engineering

Data lakes used for storing sets of information    

Data lakes enable storing huge amounts of raw data in its original, unprocessed format. Thanks to the possibilities offered by Data Engineering, data lakes can accept and integrate data from a wide variety of sources (text documents, images, IoT sensor data), making it possible to analyse and utilise complex sets of information in one place. The flexibility of data lakes and their ability to integrate diverse types of data make them extremely valuable to organisations facing the challenge of managing and analysing dynamically changing data sets. Unlike Data Warehouses, Data Lakes offer greater versatility in handling a variety of data types, made possible by the advanced data processing and management techniques used in Data Engineering. That versatility, however, also raises challenges in storing and managing such complex data sets, requiring data engineers to constantly adapt and implement innovative approaches [1, 2].
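
To make the idea tangible, here is a minimal sketch of a file-based landing zone: raw payloads of different types are stored unchanged, partitioned by source and date, with a small JSON metadata sidecar. The paths and payloads are invented; a production lake would typically sit on object storage such as S3 with a proper data catalogue on top.

```python
import json
from datetime import date, datetime, timezone
from pathlib import Path

LAKE_ROOT = Path("datalake/raw")  # hypothetical local landing zone

def ingest(source: str, payload: bytes, extension: str) -> Path:
    """Store a raw payload unchanged, partitioned by source and date."""
    partition = LAKE_ROOT / source / date.today().isoformat()
    partition.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now(timezone.utc).strftime("%H%M%S%f")
    target = partition / f"{stamp}.{extension}"
    target.write_bytes(payload)  # keep the original, unprocessed format
    # Sidecar metadata makes the raw file discoverable later.
    target.with_suffix(".meta.json").write_text(
        json.dumps({"source": source, "ingested_at": stamp, "bytes": len(payload)})
    )
    return target

ingest("iot-sensors", b'{"temp": 21.5}', "json")
ingest("documents", b"quarterly report text...", "txt")
```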

Information processing in data lakes and the application of machine learning   

The increasing volume and diversity of stored data pose a challenge for effective processing and analysis. Traditional methods are often unable to keep up with the growing complexity, leading to delays and limitations in accessing key information. Machine Learning, supported by innovations in Data Engineering, can significantly improve these processes. Using extensive data sets, Machine Learning algorithms identify patterns, predict outcomes and automate decisions. Thanks to the integration with Data Lakes (img 2), they can work with a variety of data types, from structured to unstructured, enabling more complex analyses. Such comprehensiveness enables a more thorough understanding and use of data that would be inaccessible in traditional systems.

Applying Machine Learning to Data Lakes enables deeper analysis and more efficient processing, facilitated by advanced Data Engineering tools and strategies. This enables organisations to transform great amounts of raw data into useful, valuable information that is important for increasing their operational and strategic efficiency. Moreover, the use of Machine Learning supports the interpretation of collected data and contributes to more informed business decision-making. As a result, companies can adapt to market demands more dynamically and create data-driven strategies in an innovative way.

img 2 – Data Lakes

Fundamentals of Machine Learning, key techniques and their application  

Machine Learning is an integral part of the so-called artificial intelligence; it enables information systems to learn and develop based on data. Several types of learning are distinguished in the field: Supervised Learning, Unsupervised Learning and Reinforcement Learning. In Supervised Learning, each data sample is assigned a label or score that allows machines to learn, for example, to recognise patterns and create forecasts; this type of learning is used, inter alia, in image classification or financial forecasting. Unsupervised Learning, which uses unlabelled data, focuses on finding hidden patterns and is useful in tasks such as grouping elements or detecting anomalies. Reinforcement Learning is based on a system of rewards and punishments and helps machines optimise their actions under dynamically changing conditions, e.g. in games or automation [3].

In terms of algorithms, neural networks are excellent at recognising patterns in complex data such as images or sound, and they form the basis of many advanced AI systems. Decision trees are used for classification and predictive analysis, for example in recommendation systems or sales forecasting. Each of these algorithms has unique applications and can be tailored to the specific needs of a task or problem, which makes Machine Learning a versatile tool in the world of data.
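
To ground these concepts, here is a small sketch using scikit-learn on synthetic data (the data and parameters are invented for illustration): a decision tree is trained on labelled points (supervised learning), while k-means groups the same points without labels (unsupervised learning).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Supervised: labelled points drawn from two synthetic classes.
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print("predicted class:", tree.predict([[3.5, 4.2]])[0])

# Unsupervised: the same points without labels, grouped by k-means.
kmeans = KMeans(n_clusters=2, n_init=10).fit(X)
print("cluster sizes:", np.bincount(kmeans.labels_))
```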

Examples of applications of Machine Learning 

The application of Machine Learning to Data Lakes opens up a wide spectrum of possibilities, ranging from anomaly detection, through personalisation of offers, to supply chain optimisation. In the financial sector, such algorithms effectively analyse transaction patterns and identify anomalies or potential fraud in real time, which is crucial in preventing financial crime. In retail and marketing, Machine Learning enables the personalisation of offers by analysing customers’ purchase behaviour and preferences, increasing customer satisfaction and sales efficiency [4]. In industry, the algorithms contribute to the optimisation of supply chains by analysing data from various sources, such as weather forecasts or market trends, helping to predict demand and manage inventory and logistics [5].
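
As a concrete illustration of the fraud-detection use case, the sketch below trains an Isolation Forest on synthetic transactions. The feature choice, amounts and contamination rate are assumptions made for the example, not a production setup.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic transactions: [amount, hour of day]; mostly routine daytime
# payments, plus a few unusually large night-time transfers.
normal = np.column_stack([rng.normal(120, 30, 500), rng.integers(8, 20, 500)])
odd = np.array([[5000, 3], [7200, 2], [6400, 4]])
X = np.vstack([normal, odd])

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks suspected anomalies
print("flagged transactions:\n", X[flags == -1])
```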

The algorithms can also be used for preliminary design or product optimisation. Another interesting application of Machine Learning in Data Lakes is image analysis. Machine Learning algorithms are able to process and analyse large sets of images, and are used in fields such as medical diagnostics, where they can help detect and classify lesions in radiological images, or in security systems, where camera image analysis can be used to identify and track objects or people.

Conclusions

The article highlights developments in the field of data analytics, showing how Machine Learning, Data Lakes and data engineering influence the way organisations process and use information. Introducing such technologies into business improves existing processes and opens the way to new opportunities. The Data Engineering area modernises information processing, bringing greater precision, deeper conclusions and faster decision-making. That progress underlines the growing value of Data Engineering in the modern business world, as an important factor in adapting to dynamic market changes and creating data-driven strategies.

References 

[1] https://bfirst.tech/data-engineering/ 

[2] https://www.netsuite.com/portal/resource/articles/data-warehouse/data-lake.shtml 

[3] https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained 

[4] https://www.tableau.com/learn/articles/machine-learning-examples 

[5] https://neptune.ai/blog/use-cases-algorithms-tools-and-example-implementations-of-machine-learning-in-supply-chain

New developments in desktop computers

Today’s desktop computer market is thriving. Technology companies are trying to differentiate their products by incorporating innovative features into them. Recently, the Mac Studio with the M1 Ultra chip has received a lot of recognition.

The new computer from Apple stands out above all for its size and portability. Unveiled at the beginning of March, the product is a full-fledged desktop enclosed in a case measuring 197 x 197 x 95 mm. Comparing this to Nvidia’s RTX-series graphics cards, for instance the Gigabyte GeForce RTX 3090 Ti 24GB GDDR6X, where the GPU alone measures 331 x 150 x 70 mm, one effectively gets a whole computer the size of a graphics card. [4]

Fig. 1 – Apple M1 Ultra – front panel [5]

Difference in construction

Cores are the physical parts of a CPU where processes and calculations take place; the more cores, the more work the CPU can perform in parallel. The technological process, expressed in nm, represents the gate size of the transistors and translates into the power requirements and heat generated by the CPU. The smaller the value in nm, the more efficient the CPU.

The M1 Ultra CPU has 20 cores and the same number of threads, and is made with 5nm technology. [4][6] In comparison, AMD offers a maximum of 16 cores and 32 threads in 7nm technology [7] (AMD’s new ZEN4 series CPUs are expected to have 5nm technology, but we do not know the exact specifications at this point [3]) and Intel 16 cores and 32 threads in 14nm technology [8]. In view of the above, in theory, the Apple product has a significant advantage over the competition in terms of single thread performance. [Fig. 2]

Performance of the new Apple computer

According to the manufacturer’s claims, the GPU from Apple was supposed to outperform the best graphics card available at the time, the RTX 3090.

Fig. 2 – Graph of CPU performance against the amount of power consumed [9]. Graph shown by Apple during the presentation of the new product

The integrated graphics card was supposed to deliver better performance while consuming over 200W less power. [Fig. 3] After the release, however, users quickly checked the manufacturer’s assurances and found that the RTX significantly outperformed Apple’s product in benchmark tests.

Fig. 3 – Graph of graphics card performance against the amount of power consumed [9]. Graph shown by Apple during the presentation of the new product, compared to the RTX 3090

The problem is that these benchmarks mostly use software not optimised for macOS, so the Apple product cannot use all of its power. In tests that use the full GPU power, the M1 Ultra performs very similarly to its dedicated rival. Unfortunately, not all applications are written for Apple’s OS, which severely limits the applications in which the computer’s full power can be used. [10]

The graph below shows a comparison of the frame rate in “Shadow of the Tomb Raider” from 2018. [Fig. 4] The more frames, the smoother the image.  [2]

Fig. 4 – The frame rate of the Tomb Raider series game (the more the better) [2].

Power consumption of the new Mac Studio M1 Ultra compared to standard PCs

Despite its high performance, Apple’s new product is very energy-efficient. The manufacturer states that its maximum continuous power consumption is 370W. Standard PCs with modern components do not go below 500W, and the recommended power supply for hardware with the best parts is 1000W [Table 1] (Nvidia GeForce RTX 3090 Ti + AMD R7/R9 or Intel i7/i9).

| GPU \ CPU | Intel i5 / AMD R5 | Intel i7 / AMD R7 | Intel i9 K / AMD R9 |
|---|---|---|---|
| NVIDIA RTX 3090 Ti | 850W | 1000W | 1000W |
| NVIDIA RTX 3090 | 750W | 850W | 850W |
| NVIDIA RTX 3080 Ti | 750W | 850W | 850W |
| NVIDIA RTX 3080 | 750W | 850W | 850W |
| NVIDIA RTX 3070 Ti | 750W | 850W | 850W |
| NVIDIA RTX 3070 | 650W | 750W | 750W |
| Lower graphics cards | 650W | 650W | 650W |

Table 1 – Recommended PSU wattage depending on the CPU and graphics card used. AMD and Intel CPUs in the columns, NVIDIA RTX series graphics cards in the rows. [1]

This means significantly lower running costs for such a computer. Assuming the computer works 8 hours a day and an average kWh cost of PLN 0.77, this gives a saving of around PLN 1,500 a year. In countries that are not powered by green energy, it also means less pollution.
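
The saving can be roughly reconstructed as follows, assuming the comparison is a PC drawing around 1000W under load versus the Mac’s 370W (the constant load and usage pattern are simplifying assumptions):

$$(1.00 - 0.37)\,\mathrm{kW} \times 8\,\mathrm{h/day} \times 365\,\mathrm{days} \approx 1840\,\mathrm{kWh}$$

$$1840\,\mathrm{kWh} \times 0.77\,\mathrm{PLN/kWh} \approx 1417\,\mathrm{PLN} \approx 1500\,\mathrm{PLN\ per\ year}$$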

Apple’s product problems

Products from Apple have dedicated software, which means better compatibility with the hardware and translates into better performance; however, it also means that a lot of software not written for macOS cannot fully exploit the potential of the M1 Ultra. The product does not allow the use of two operating systems or the independent installation of Windows/Linux. So it turns out that what allows the M1 Ultra to perform so well in some conditions is also the reason why it cannot compete on performance in other programs. [10]

Conclusion

The Apple M1 Ultra is a powerful computer in a small box. Its 5nm technology gives it the best energy efficiency among products currently available on the market. However, due to its limited compatibility and high price, it will not replace standard computers. To get maximum performance, software dedicated to Apple’s operating system is required, and anyone deciding on this computer must keep this in mind. For this reason, despite its many advantages, it is more of a product for professional graphic designers, musicians or video editors.

References

[1] https://www.msi.com/blog/we-suggest-80-plus-gold-1000w-and-above-psus-for-nvidia-geforce-rtx-3090-Ti

[2] https://nano.komputronik.pl/n/apple-m1-ultra/

[3] https://www.tomshardware.com/news/amd-zen-4-ryzen-7000-release-date-specifications-pricing-benchmarks-all-we-know-specs

[4] https://www.x-kom.pl/p/730594-nettop-mini-pc-apple-mac-studio-m1-ultra-128gb-2tb-mac-os.html

[5] https://dailyweb.pl/apple-prezentuje-kosmicznie-wydajny-mac-studio-z-nowym-procesorem-m1-ultra/

[6] https://geex.x-kom.pl/wiadomosci/apple-m1-ultra-specyfikacja-wydajnosc/

[7] https://www.amd.com/pl/partner/ryzen-5000-series-desktop

[8] https://www.cpu-monkey.com/en/

[9] https://www.apple.com/pl/newsroom/2022/03/apple-unveils-m1-ultra-the-worlds-most-powerful-chip-for-a-personal-computer/

[10] https://youtu.be/kVZKWjlquAU?t=301

ANC — Financial Aspects

Today’s realities are making people increasingly inclined to discuss finances. This applies to both private household budgets and major, global-level investment projects. There is no denying the fact that attention to finances has resulted in the development of innovative methods of analysing them. These range from simple applications that allow us to monitor our day-to-day expenses to huge accounting and bookkeeping systems that support global corporations. The discussions about money also pertain to investment projects in a broader sense. They are very often associated with the implementation of modern technologies, which are implicitly intended to bring even greater benefits, with the final result being greater profit. Yet how do you define profit? And is it really the most crucial factor in today’s perception of business? Finally, how can active noise reduction affect productivity and profit?

What is profit?

The literature explains that “profit is the excess of revenue over costs” [1]. In other words, profit is a positive financial result. Colloquially speaking, it is a state in which you sell more than you spend. This is certainly a desirable phenomenon since, after all, the idea is for a company to be profitable. Profit serves as the basis for further investment projects, enabling the company to continue to meet customer needs. Speaking of profit, one can distinguish several types of it [2]:

  1. Gross profit, i.e. the difference between net sales revenue and costs of products sold. It allows you to see how a unit of your product translates into the bottom line. This is particularly vital for manufacturing companies, which often seek improvements that will ultimately allow them to maintain economies of scale.
  2. Net profit, i.e. the surplus that remains once all costs have been deducted. In balance sheet terms, this is the difference between sales revenue and total costs. In today’s world, it is frequently construed as a factor that indicates the financial health of an enterprise.
  3. Operating profit, i.e. a specific type of profit that is focused solely on the company’s result in its core business area. It is very often listed as EBIT in the profit and loss account.

Profit vs productivity

Productivity, in this context, also involves ensuring that the work does not harm the workers’ lives or health over the long term. The general classification of the Central Institute for Labour Protection lists such harmful factors as [3]:

  • noise and mechanical vibration,
  • mechanical factors,
  • chemical agents and dust,
  • musculoskeletal stress,
  • stress,
  • lighting,
  • optical radiation,
  • electricity.

The classification also lists thermal loads, electromagnetic fields, biological agents, and explosion and fire hazards. Yet the most common problem is that of industrial noise and vibrations, which the human ear is often unable to pick up at all. It has often been observed that, in a perpetually noisy environment, concentration decreases while sleepiness increases. Hence, one may conclude that even something as inconspicuous as noise and vibration generates considerable costs for the entrepreneur, especially in terms of unit costs (in mass production). As such, it is crucial to take action on noise reduction. If you would like to learn more about how to combat noise pollution, click here to sign up for training.

How do you avoid incurring costs?

Today’s R&D companies, engineers and specialists thoroughly research and improve production systems, which allows them to develop solutions that eliminate even the most intractable human performance problems. Awareness of better employee care is deepening year on year. Hence the artificial intelligence boom, which is aimed at creating solutions and systems that facilitate human work. However, such solutions require a considerable investment, and as such, financial engineers make every effort to optimise their costs.

Step 1 — Familiarise yourself with the performance characteristics of the factory’s production system in production and economic terms.

Each production process has unique characteristics and performance, which affect production results to some extent. To be measurable, these processes must first be examined using dedicated indicators. Based on knowledge of the process and the data captured by such indicators, it is worth determining process performance at both the production and economic levels. Production performance determines the productivity of the human-machine team, while economic performance examines productivity from a profit or loss perspective. Production bottlenecks that determine process efficiency are often identified at this stage. It is worthwhile to produce a report on the status of production efficiency at this point.

Step 2 — Determine the technical and economic assumptions

The process performance characteristics report serves as the basis for setting the assumptions. It allows you to identify the least and most efficient processes. The identification of assumptions is intended to draw up current objectives for managers of specific processes. In the technical dimension, the assumptions typically relate to the optimisation of production bottlenecks. In the economic dimension, it is worth focusing your attention on cost optimisation, resulting from the cost accounting in management accounting. Technical and economic assumptions serve as the basis for implementing innovative solutions. They make it possible to greenlight the changes that need to happen to make a process viable.

Step 3 — Revenue and capital expenditure forecasts vs. active noise reduction

Afterwards, you must carry out predictive testing. It aims to examine the distribution over time of the revenue and capital expenditure incurred for both the implementation and subsequent operation of the system in an industrial setting.

Forecasted expenditure with ANC
Figure 1 Forecast expenditure in the 2017-2027 period
Forecasted revenue with ANC
Figure 2 Forecast revenue in the 2017-2027 period

From an economic standpoint, the implementation of an active noise reduction system can smooth income fluctuations over time. The analysis of previous periods clearly shows cyclicality, with linear trends in both increases and decreases. The stabilisation correlates with the implementation of the described system and may reflect a permanent increase in capacity associated with integrating the system into the production process. Hence the conclusion that improvements in productive efficiency result in income stabilisation over time. On the other hand, the implementation of the system requires higher expenditures, although the expenditure level trends downwards year on year.

This data allows you to calculate basic measures of investment profitability. At this point, you can also carry out introductory calculations to determine income and expenditure at a single point in time. This allows you to calculate the discount rate and forecast future investment periods [1].

Step 4 — Evaluating investment project effectiveness using static methods

Calculating measures of investment profitability allows you to see if what you wish to put your capital into will give you adequate and satisfactory returns. When facing significant competition, investing in such solutions is a must. Of course, the decisions taken can tip the balance in two ways. Among the many positive aspects of investing are increased profits, reduced costs and a stronger market position. Yet there is also the other side of the coin. Bad decisions, typically based on ill-prepared analyses or made with no analyses at all, often involve lost profits and may force you to incur opportunity costs as well. Even more often, ill-considered investment projects result in a decline in the company’s value. In static terms, we are talking about the following indicators:

  • Annual rate of return,
  • Accounting rate of return,
  • Payback period.

In the present case, i.e. the implementation of an active noise reduction system, the annual and accounting rates of return are approximately 200%. The payback period settles at less than a year. This is due to the large disparity between the expenses incurred in implementing the system and the benefits of its implementation. However, to be completely sure about implementation, the Net Present Value (NPV) and Internal Rate of Return (IRR) still need to be calculated. The NPV and IRR determine the performance of the investment project over the subsequent periods studied.

Step 5 — Evaluating effectiveness using dynamic methods

In this section, you must consider the investment project’s efficiency and the impact that this efficiency has on its future value. Therefore, the following indicators must be calculated:

  • Net Present Value (NPV),
  • Net Present Value Ratio (NPVR),
  • Internal Rate of Return (IRR).

In pursuing a policy of introducing innovation, industrial companies face the challenge of maximising performance indicators. Considering the correlation between applying active noise reduction methods that improve working conditions, and thus employee performance, one may conclude that the improvement in work productivity is reflected in the financial results, which has a direct impact on the assessment of the effectiveness of such a project. Despite the high initial expenditure, this solution offers long-term benefits by improving production stability.
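
To show how the measures from Steps 4 and 5 can be computed, here is a minimal Python sketch. The cash flows and discount rate are invented for illustration and are not the figures of the project described above.

```python
# Illustrative investment metrics; cash flows in kPLN are invented.
# Year 0 is the initial expenditure, years 1+ are net inflows.
cash_flows = [-500, 950, 400, 380, 360, 340]
rate = 0.10  # assumed annual discount rate

def npv(rate, flows):
    """Net Present Value: discounted sum of all cash flows."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal Rate of Return via bisection (assumes one sign change)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, flows) > 0:
            lo = mid  # NPV still positive: the IRR lies above mid
        else:
            hi = mid
    return (lo + hi) / 2

def payback_years(flows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for year, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return year
    return None

print(f"NPV  = {npv(rate, cash_flows):.1f} kPLN")
print(f"NPVR = {npv(rate, cash_flows) / -cash_flows[0]:.2f}")
print(f"IRR  = {irr(cash_flows):.1%}")
print(f"Payback within year {payback_years(cash_flows)}")
```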

Is it worth carrying out initial calculations of investment returns?

To put it briefly: yes, it is. They prove helpful in decision-making processes. They represent an initial screening for decision-makers — a pre-selection of profitable and unprofitable investment projects. At that point, the management is able to establish the projected profitability even down to the operational level of the business. Reacting to productivity losses allows bosses to identify escaping revenue streams and react earlier to potential technological innovations. A preliminary assessment of cost-effectiveness is a helpful tool for making accurate and objective decisions.

References

[1] D. Begg, G. Vernasca, S. Fischer, „Mikroekonomia”, PWE, Warszawa 2011

[2] mfiles.pl/pl/index.php/Zysk

[3] Felis P., 2005: Metody i procedury oceny efektywności inwestycji rzeczowych przedsiębiorstw. Wydawnictwo Wyższej Szkoły Ekonomiczno-Informatycznej. Warszawa.

Digital image processing

Signal processing accompanies us every day. All stimuli (signals) received from the world around us, such as sound, light or temperature, are converted into electrical signals, which are then sent to the brain. The brain analyses and interprets the received signal, and as a result we extract information from it (e.g. we recognise the shape of an object, we feel heat, etc.).

Digital signal processing (DSP) works similarly. In this case, an analog signal is converted into a digital one by an analog-to-digital converter. The received signals are then processed by a digital computer. DSP systems also use computer peripherals equipped with signal processors, which allow signals to be processed in real time. Sometimes it is necessary to convert the signal back to analog form (e.g. to control a device); for this purpose, digital-to-analog converters are used.

Digital signal processing has a wide range of applications, including sound processing, speech recognition and image processing. The last of these is the subject of this article, in which we discuss in depth the basic operation of convolutional filtration in digital image processing.

What is image processing?

Simply speaking, digital image processing consists in transforming an input image into an output image. The aim of this process is to select information, keeping the most important (e.g. shape) and eliminating the unnecessary (e.g. noise). Digital image processing features a variety of operations, such as:

  • filtration,
  • thresholding,
  • segmentation,
  • geometry transformation,
  • coding,
  • compression.

As we mentioned before, in this article we will focus on image filtration.

Convolutional filtration

Both in the one-dimensional domain (for audio signals) and in two dimensions, there are specific tools for operating on signals, in this case on images. One such tool is filtration. It consists of mathematical operations on pixels which produce a new image. Filtration is commonly used to improve image quality or to extract important features from an image.

The basic operation in this filtration method is the 2D convolution function. It allows image transformations to be applied using appropriate filters in the form of matrices of coefficients. Filtering calculates a point's new value based on the brightness values of points in its closest neighbourhood. So-called masks, containing weights assigned to the neighbouring pixels, are used in these calculations. The usual mask sizes are 3×3, 5×5, and 7×7. The process of convolving an image with a filter is shown below.

Assuming that the image is represented by a 5×5 matrix containing colour values and the filter by a 3×3 matrix, the image is modified by convolving these two matrices.

The first thing to do is to flip the coefficients of the filter vertically and horizontally (i.e. rotate the mask by 180°). We assume that the centre of the filter kernel, h(0,0), is in the middle of the matrix, as shown in the picture below. Therefore the (m,n) indices denoting the rows and columns of the filter matrix take both negative and positive values.

Image filtration diagram
Img 1 Filtration diagram

Considering the filter matrix (the blue one) as flipped vertically and horizontally, we can perform the filtration. It starts by placing the centre h(0,0) of the blue matrix h(m,n) over the element s(-2,-2) of the yellow matrix s(i,j) (the image). Then we multiply the overlapping values of both matrices and add the products. In this way we obtain the convolution result for the (-2,-2) cell of the output image. It is important to remember the normalisation step, which adjusts the brightness of the result by dividing it by the sum of the filter coefficients. This prevents the output image brightness from going out of the 0-255 scale (in the case of an 8-bit image representation).

The next stages of the process are very similar. We move the centre of the blue matrix over the (-2,-1) cell, again multiply the overlapping values, add the products, and divide the result by the sum of the filter coefficients. Cells that fall outside the area of the matrix s(i,j) are considered undefined; the values at these positions do not exist, so they are excluded from the multiplication.
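
For readers who prefer code, the procedure just described can be written out directly. The following is a minimal NumPy sketch of this convolution, with the kernel flip, the skipping of undefined border cells, and the normalisation step; the function and variable names are ours, not part of any particular library.

    import numpy as np

    def convolve2d(image, mask):
        """Convolve a 2D image with a filter mask as described above:
        flip the mask, slide its centre over every pixel, multiply the
        overlapping values, sum them, and normalise by the coefficients
        actually used. Cells where the mask sticks out are skipped."""
        flipped = np.flipud(np.fliplr(mask))  # flip vertically and horizontally
        kh, kw = flipped.shape
        ch, cw = kh // 2, kw // 2             # position of h(0, 0) in the mask
        h, w = image.shape
        out = np.zeros((h, w))

        for i in range(h):
            for j in range(w):
                acc, norm = 0.0, 0.0
                for m in range(kh):
                    for n in range(kw):
                        y, x = i + m - ch, j + n - cw
                        if 0 <= y < h and 0 <= x < w:   # skip undefined cells
                            acc += flipped[m, n] * image[y, x]
                            norm += flipped[m, n]
                # normalisation keeps brightness in the 0-255 range; masks whose
                # coefficients sum to zero (e.g. edge filters) are left as-is
                out[i, j] = acc / norm if norm != 0 else acc
        return np.clip(out, 0, 255)

Called with a 5×5 image matrix and a 3×3 mask, this reproduces the cell-by-cell walkthrough above, including the undefined cells at the image borders.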

The usage of convolutional filtration

Depending on the type of filter, we can distinguish several applications of convolutional filtration. Low-pass filters are used to remove noise from images, while high-pass filters are used to sharpen images or emphasise edges. To illustrate the effects of different filters, we will apply them to a real image. The picture below is in JPG format and was loaded into Octave as an M×N×3 pixel matrix.

Original input image
Img 2 Original Input Image

Gaussian blur

To blur an image, we need the convolution function together with a properly prepared filter. One of the most commonly used low-pass filters is the Gaussian filter. It lowers the sharpness of the image and is also used to reduce noise.

For this article, a 29×29 mask based on the Gaussian function with a standard deviation of 5 was generated. The normal distribution assigns weights to the surrounding pixels during convolution. A low-pass filter suppresses the high-frequency elements of the image while passing the low-frequency ones. Compared to the original, the output image is blurry, and the noise is significantly reduced.
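
A mask like the one used here can be generated in a few lines. The sketch below builds a normalised Gaussian mask; the 29×29 size and standard deviation of 5 match the values mentioned above, and convolve2d refers to the sketch from the previous section.

    import numpy as np

    def gaussian_kernel(size=29, sigma=5.0):
        """Build a normalised 2D Gaussian mask of the given size and spread."""
        ax = np.arange(size) - size // 2          # coordinates centred on zero
        xx, yy = np.meshgrid(ax, ax)
        kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
        return kernel / kernel.sum()              # weights sum to one

    # blurred = convolve2d(channel, gaussian_kernel())  # applied per colour channel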

Blurred input image
Img 3 Blurred input image

Sharpen

We can blur an image, but there is also a way to sharpen it. To do so, a suitable high-pass filter should be used. Such a filter passes and amplifies image elements characterised by high frequency, e.g. noise and edges, while suppressing low-frequency elements. Applying this filter sharpens the original image, which is particularly noticeable in the arm area.
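
One commonly used sharpening mask is shown below; this 3×3 Laplacian-style kernel is a typical textbook choice and an assumption on our part, not necessarily the exact filter used for the picture.

    import numpy as np

    # High-pass sharpening mask: the large centre weight amplifies the pixel,
    # the negative neighbours subtract the local average, boosting edges.
    sharpen_mask = np.array([[ 0, -1,  0],
                             [-1,  5, -1],
                             [ 0, -1,  0]])

    # sharpened = convolve2d(channel, sharpen_mask)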

Sharpened input image
Img 4 Sharpened input image

Edges detection

Another possible operation is edge detection. Shifting and subtracting filters are used to detect edges in an image. They work by shifting the image and subtracting the original from the shifted copy. As a result, edges are detected, as shown in the picture below.
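
Written as a convolution mask, shift-and-subtract can look like the sketch below; this one-pixel horizontal variant is just one illustrative choice among many possible shift directions.

    import numpy as np

    # Shift-and-subtract mask: the +1 picks up a neighbouring pixel and the -1
    # removes the original, so uniform areas cancel out and edges remain.
    edge_mask = np.array([[0,  0, 0],
                          [1, -1, 0],
                          [0,  0, 0]])

    # edges = convolve2d(channel, edge_mask)

Note that in the convolve2d sketch above, negative results are clipped to zero; taking the absolute value of the difference is a common alternative.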

Edge detection
Img 5 Edge detection

BFirst.Tech experience with image processing

Our company employs well-qualified staff with experience in the field of image processing. One of our original projects is TIRS, a platform which diagnoses areas of the human body that might be affected by cancerous cells. It is based on advanced image processing algorithms and artificial intelligence, and automatically detects cancerous areas using medical imaging data obtained from computed tomography and magnetic resonance imaging. The platform finds its use in clinics and hospitals.

Our other project involving image processing is the Virdiamed platform, created in cooperation with Rehasport Clinic. The platform allows a 3D reconstruction of CT and MRI data and makes it possible to view the 3D data in a web browser. If you want to read more about our projects, click here.

Digital signal processing, including image processing, is a field of technology with a wide range of applications, and its popularity is constantly growing. Continuous technological progress means that this field is also constantly developing. Moreover, many technologies used every day are based on signal processing, which is why the importance of DSP is certain to keep growing.

References

[1] Leonowicz Z.: „Praktyczna realizacja systemów DSP”

[2] http://www.algorytm.org/przetwarzanie-obrazow/filtrowanie-obrazow.html

Smart Manufacturing

New technologies are finding their place in many areas of life. One of these is industry, where advanced technologies have been used for years and serve factories very well. The implementation of smart solutions based on advanced IT technologies in manufacturing companies has had a significant impact on technological development and improved innovation. One such solution is Smart Manufacturing, which supports industrial optimisation by drawing insights from the data generated in manufacturing processes.

What is meant by Smart Manufacturing?

Smart Manufacturing is a concept that encompasses the full integration of systems with collaborative production units that are able to react in real time and adapt to changing environmental conditions, making it possible to meet the requirements within the supply chain. The implementation of an intelligent manufacturing system supports the optimisation of production processes. At the same time, it contributes to increased profits for industrial companies.

The concept of Smart Manufacturing is closely related to concepts such as artificial intelligence (AI), the Industrial Internet of Things (IIoT) or cloud computing. What these three concepts have in common is data. The idea behind smart manufacturing is that the information it contains is available whenever necessary and in its most useful form. It is data analysis that has the greatest impact on optimising manufacturing processes and makes them more efficient.

IIoT and industrial optimisation

The Industrial Internet of Things is nothing more than the application of the IoT's potential in the industrial sector. In the intelligent manufacturing model, people, machines and processes are interconnected through IT systems. Each machine features sensors that collect vital data about its operation. The system sends the data to the cloud, where it undergoes extensive analysis. With the information obtained, employees gain insight into the exact process flow, which allows them to anticipate failures and prevent them early, avoiding possible downtime. In addition, companies can examine trends in the data or run various simulations based on it. The integration of all elements of the production process also makes it possible to remotely monitor its progress in real time and react to any irregularities. None of this would be possible without IIoT solutions.
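
As a simple illustration of this pattern, the sketch below publishes simulated sensor readings to a broker over MQTT, a protocol widely used in IIoT systems. The broker address, topic, machine name and reading format are all hypothetical; a real deployment would also handle authentication and reconnection.

    # Minimal IIoT telemetry sketch (requires the paho-mqtt package).
    import json
    import random
    import time

    import paho.mqtt.client as mqtt

    client = mqtt.Client()
    client.connect("broker.example.com", 1883)        # hypothetical cloud broker

    while True:
        reading = {
            "machine_id": "press-01",                 # hypothetical machine name
            "vibration_mm_s": random.gauss(2.0, 0.3), # simulated sensor value
            "timestamp": time.time(),
        }
        client.publish("factory/press-01/telemetry", json.dumps(reading))
        time.sleep(5)                                 # sampling interval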

The rise of artificial intelligence

Another modern technological solution used in smart manufacturing systems is artificial intelligence. Over the last few years, we have seen a significant increase in the implementation of AI solutions in manufacturing. This is now possible precisely because of the deployment of IIoT devices, which provide the huge amounts of data AI consumes. Artificial intelligence algorithms analyse the collected data and search for anomalies in it. In addition, they enable automated decision-making based on that data. What's more, artificial intelligence is able to predict problems before they occur and take appropriate steps to mitigate them.
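
What such anomaly detection can look like in practice is sketched below, using an Isolation Forest from scikit-learn trained on simulated sensor history; the data, feature choice, and contamination rate are illustrative assumptions rather than a description of any specific system.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Simulated history: each row is a sample of (temperature, vibration).
    rng = np.random.default_rng(0)
    history = rng.normal(loc=[70.0, 2.0], scale=[2.0, 0.3], size=(1000, 2))

    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(history)

    # Two new readings: the second is deliberately out of range.
    new_readings = np.array([[70.5, 2.1], [95.0, 6.0]])
    print(model.predict(new_readings))   # 1 = normal, -1 = anomaly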

Benefits for an enterprise

The implementation of Smart Manufacturing technology in factories can bring a number of benefits, primarily in the optimisation of manufacturing processes. With smart manufacturing, efficiency can be improved tremendously. Access to data on the entire process makes it possible to react quickly to any irregularities or to adapt the process to current needs (greater flexibility). This allows companies to avoid many unwanted events, such as breakdowns, which in turn has a positive effect on cost optimisation and the company's profitability. Yet another advantage is better use of machinery and equipment: by monitoring machines on an ongoing basis, companies can track their wear and tear, anticipate breakdowns and plan downtime more efficiently. This improves productivity and even the quality of the manufactured products.

The use of SM also enables real-time data visualisation, which makes it possible to manage and monitor the process remotely. In addition, the virtual representation of the process provides an abundance of contextual information that is essential for process improvement. Based on the collected data, companies can also run various types of simulations, anticipate trends and spot potential problems, which greatly improves forecasting. It is also worth mentioning that implementing modern solutions such as Smart Manufacturing increases a company's innovativeness; companies thus become more competitive, and employees perceive them as more attractive places to work.

Will automation put people out of work?

With technological developments and increasingly widespread process automation, concerns about job losses have become more apparent. Such fears are largely unfounded: people still play a pivotal role in the concept of smart manufacturing. Employees will remain responsible for controlling processes and making critical decisions. Human-machine collaboration will thus make it possible to increase the operational efficiency of the smart enterprise.

The intention behind technological development is therefore not to eliminate people, but to support them. What's more, combining human experience and creativity with the ever-increasing capabilities of machines makes it possible to execute innovative ideas that have a real impact on production efficiency. At the same time, the labour market will see an increased demand for new kinds of experts, ensuring that the manufacturing industry will not stop hiring people.

Intelligent manufacturing is an integral part of the fourth industrial revolution that is unfolding right before our eyes. The combination of machinery and IT systems has opened up new opportunities for industrial optimisation. This allows companies to realistically increase the efficiency of their processes, thereby helping to improve their profitability. BFirst.Tech offers an Industrial Optimisation service that analyses real-time data and communicates it to all stakeholders; the information it carries supports critical decision-making and results in continuous process improvement.

References

[1] https://blog.marketresearch.com/the-top-7-things-to-know-about-smart-manufacturing

[2] https://przemyslprzyszlosci.gov.pl/7-krokow-do-zaawansowanej-produkcji-w-fabryce-przyszlosci/?gclid=EAIaIQobChMIl7rb1dnD7QIVFbd3Ch21kwojEAAYASAAEgKVcfD_BwE

[3] https://www.comarch.pl/erp/nowoczesne-zarzadzanie/numery-archiwalne/inteligentna-produkcja-jutra-zaczyna-sie-juz-dzis/

[4] https://elektrotechnikautomatyk.pl/artykuly/smart-factory-czyli-fabryka-przyszlosci

[5] https://www.thalesgroup.com/en/markets/digital-identity-and-security/iot/inspired/smart-manufacturing

[6] https://www.techtarget.com/iotagenda/definition/smart-manufacturing-SM

Space mining

Mining has accompanied mankind since the dawn of time. The coming years are likely to bring yet another milestone in its development: space mining.

Visions vs reality

Space mining has long fuelled the imagination of writers and screenwriters, who paint pictures of a struggle for resources between states, corporations and cultures inhabiting various regions of the universe. Some also speak of the risks humanity could face in encounters with other life forms, as well as of extremely valuable minerals and other substances unknown on Earth that might be obtained in space.

At the moment, however, these visions are far from becoming reality. We are still cataloguing space resources, e.g. by making geological maps of the Moon [1] and observing asteroids [2]. Interestingly, the Moon is known to contain deposits of helium-3, which could one day be used as fuel for nuclear fusion. We also expect to find deposits of many valuable minerals on asteroids; for example, nickel, iron, cobalt, water, nitrogen, hydrogen and ammonia are available on the asteroid Ryugu. Our knowledge of space mineral resources is based mainly on astronomical observations. Direct analysis of surface rock samples is much rarer, and analysis of subsurface rocks happens only incidentally. We can fully analyse only those objects that have fallen to the Earth's surface. As such, we should expect many more surprises to come.

First steps in space mining

What will the beginnings look like? As an activity closely linked to the economy, mining will start to develop to meet the needs of the market. Contrary to what we are used to on Earth, access to even basic resources like water can prove problematic in space.

Water

Water can be used directly by humans, and after electrolysis it can also serve as fuel. Thus, the implementation of NASA's plans for a manned expedition to Mars, which is to be preceded by a human presence on the Moon [3], will create a demand for water on and around the Moon. Yet another significant market for space water could be satellites, all the more so since estimates indicate that it will be more profitable to bring water even into Low Earth Orbit (LEO) from the Moon than from the Earth.

For these reasons, industrial water extraction on the Moon has the potential to be the first manifestation of space mining. What could this look like in practice? Due to intense ultraviolet radiation, any ice on the lunar surface would have decomposed into oxygen and hydrogen long ago, and since the Moon lacks an atmosphere, these elements would inevitably escape into space. Ice is thus expected only in permanently shaded areas, such as the bottoms of impact craters at the poles. One method of mining the ice could be to evaporate it in a sealed, transparent tent, with energy sourced from the sun: one would only need to reflect sunlight using mirrors placed at the crater edges. At the North Pole, there are places where the sun shines virtually all the time.

Regolith

One of the first materials to be harvested on the Moon is likely to be regolith, the dust that covers the Moon's surface. While regolith may contain trace amounts of water, the main hope is that it can be used for 3D printing, which would make it possible to quickly and cheaply construct the facilities of the planned lunar base [4]. The facilities of such a base will need to protect humans against harmful cosmic radiation. And although regolith is not terribly efficient as radiation shielding compared to other materials (a thick layer of it is needed), its advantage is that it does not need to be ferried from Earth.

Generally speaking, the ability to use local raw materials to the highest extent possible is an important factor in the success of space projects to create sustainable extraterrestrial habitats. Thus, optimising these processes is a key issue (click here to learn more about industry optimisation opportunities).

Asteroids

Another direction for space mining could be asteroids [5]. Scientists are considering capturing smaller asteroids and bringing them back to Earth. It is also possible to bring both smaller and larger asteroids into orbit and mine them there. Yet another option is to mine asteroids without moving them, and then deliver only the excavated material, perhaps after initial processing, to Earth.

Legal barriers

One usually overlooked issue is that, apart from the obvious technological and financial constraints, the legal questions surrounding the commercial exploitation of space can prove to be a major barrier [6]. As of today, the four most important international space regulations are as follows [7]:

  • 1967 Outer Space Treaty,
  • 1968 Astronaut Rescue Agreement,
  • 1972 Convention on International Liability for Damage Caused by Space Objects, and
  • 1975 Convention on the Registration of Objects Launched into Outer Space.

They formulate the principles of the freedom and non-exclusivity of space, describe the treatment of astronauts as envoys of mankind, and attribute nationality to every object sent into space. They also regulate liability for damage caused by objects sent into space. However, they do not regulate the economic matters related to space exploitation. This gap is partly filled by the 1979 Moon Agreement. Although few states (18) have ratified it, it aspires to create important customary norms for covering space with legal provisions.

Among other things, it stipulates that the Moon’s natural resources are the common heritage of mankind and that neither the surface nor the resources of the Moon may become anyone’s property[8]. The world’s most affluent countries are reluctant to address its provisions. In particular, the US has officially announced that it does not intend to comply with the Agreement. Could it be that asteroid mining is set to become part of some kind of space colonialism?

References

[1] https://store.usgs.gov/filter-products?sort=relevance&scale=1%3A5%2C000%2C000&lq=moon

[2] http://www.asterank.com

[3] https://www.nasa.gov/topics/moon-to-mars

[4] https://all3dp.com/mit-autonomous-construction-rig-could-make-3d-printed-homes/

[5] http://space.alglobus.net/presentations/

[6] http://naukawpolsce.pap.pl/aktualnosci/news%2C81117%2Cdr-pawel-chyc-prawo-w-kosmosie-szczegolne-wyzwanie.html

[7] http://www.unoosa.org/oosa/en/ourwork/spacelaw/index.html

[8] https://kosmonauta.net/2011/09/uklad-ksiezycowy/

Generative Adversarial Networks

GANs, i.e. Generative Adversarial Networks, were first proposed in 2014 by Ian Goodfellow, then a PhD student at the University of Montreal, together with colleagues including Yoshua Bengio. In 2016, Facebook's AI research director and New York University professor Yann LeCun called them "the most interesting idea in the last 10 years in machine learning".

In order to understand what GANs are, it helps to compare them with discriminative algorithms, such as simple deep neural networks (DNNs). For an introduction to neural networks, please see this article. For more information on Convolutional Neural Networks, click here.

Let us use the problem of predicting whether a given email is spam as an example. The words that make up the body of the email are the variables that determine one of two labels: "spam" and "non-spam". The discriminative algorithm learns from the input vector (the words occurring in a given message, converted into a mathematical representation) to predict how likely the given email is to be spam; i.e. the output of the discriminator is the probability of the input being spam, so it learns the relationship between the input and the output.
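
For illustration, a discriminative spam model of this kind can be assembled in a few lines; the sketch below uses a bag-of-words representation with logistic regression, and the training messages are toy data of our own invention.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy training data: message texts and their spam / non-spam labels.
    emails = ["win money now", "meeting at noon",
              "cheap pills online", "project update attached"]
    labels = [1, 0, 1, 0]   # 1 = spam, 0 = non-spam

    # Words are converted to a count vector; the model outputs P(spam | words).
    clf = make_pipeline(CountVectorizer(), LogisticRegression())
    clf.fit(emails, labels)

    print(clf.predict_proba(["free money online"])[0, 1])   # probability of spam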

GANs do the exact opposite. Instead of predicting a label from the input data, they try to predict the data given a label. More specifically, they try to answer the following question: assuming this email is spam, how likely is this data?

Even more precisely, the task of Generative Adversarial Networks is to solve the problem of generative modelling, which can be approached in two ways (in both cases you need large amounts of high-dimensional data, e.g. images or sound). The first possibility is density estimation: given many examples, find the probability density function that describes them. The second is to create an algorithm that learns to generate data from the same training dataset (this is not about re-creating the same information, but about creating new information that could plausibly belong to that dataset).

What generative modelling approach do GANs use?

This approach can be likened to a game played by two agents. One is a generator that attempts to create data. The other is a discriminator that predicts whether that data is true or not. The generator's goal is to fool the other player, so, over time, as both get better at their tasks, it is forced to generate data as similar as possible to the training data.

What does the learning process look like?

The first agent, i.e. the discriminator (a differentiable function D, usually a neural network), receives a piece of the training data as input (e.g. a photo of a face). This input is called x (it is simply the name of the model input), and the goal is for D(x) to be as close to 1 as possible, meaning that x is a true example.

The second agent, i.e. the generator (a differentiable function G, usually also a neural network), receives white noise z (random values that allow it to generate a variety of plausible images) as input. Applying the function G to the noise z yields x (in other words, G(z) = x). We hope that the sample x will be quite similar to the original training data but will have some flaws, such as noticeable noise, that may allow the discriminator to recognise it as a fake example. The next step is to apply the discriminative function D to the fake sample G(z). At this point, the goal of D is to make D(G(z)) as close to zero as possible, whereas the goal of G is for D(G(z)) to be close to one.
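
These two objectives translate almost directly into a training loop. The sketch below is a deliberately tiny PyTorch version for two-dimensional toy data; the network sizes, learning rate, and data distribution are our own illustrative choices, and real image GANs would use convolutional architectures.

    import torch
    import torch.nn as nn

    # Tiny fully connected generator G and discriminator D for toy 2D data.
    G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))
    D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCELoss()

    for step in range(5000):
        real = torch.randn(32, 2) * 0.5 + 3.0    # stand-in for real training data
        z = torch.randn(32, 16)                  # white noise input for G
        fake = G(z)

        # Discriminator step: push D(real) towards 1 and D(G(z)) towards 0.
        loss_d = bce(D(real), torch.ones(32, 1)) + \
                 bce(D(fake.detach()), torch.zeros(32, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: push D(G(z)) towards 1, i.e. fool the discriminator.
        loss_g = bce(D(fake), torch.ones(32, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()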

This is akin to the struggle between money counterfeiters and the police. The police want the public to be able to use real banknotes without the risk of being cheated, as well as to detect counterfeit notes, remove them from circulation, and punish the criminals. At the same time, the counterfeiters want to fool the police and spend the money they have created. Consequently, both the police and the criminals learn to do their jobs better and better.

Assuming that the capabilities of the police and the counterfeiters, i.e. the discriminator and the generator, are unlimited, the equilibrium point of this game is as follows: the generator has learned to produce perfect fakes that are indistinguishable from real data, so the discriminator's output is always 0.5, as it cannot tell whether a sample is true or not.

What are the uses of GANs?

GANs are used extensively in image-related operations. This is not their only application, however, as they can be used for any type of data.

Style Transfer by CycleGAN
Figure 1 Style Transfer carried out by CycleGAN

For example, the DiscoGAN network can transfer a style or design from one domain to another (e.g. transform a handbag design into a shoe design). It can also generate a plausible image from a sketch of an item (many other networks, such as Pix2Pix, can do this too). Known as Style Transfer, this is one of the more common uses of GANs. Other examples of this application include the CycleGAN network, which can transform an ordinary photograph into a painting reminiscent of artworks by Van Gogh or Monet. GANs also enable the generation of images from text descriptions (the StackGAN network) and can even be used to enhance image resolution (the SRGAN network).

Useful resources

[1] Goodfellow I., Improved Techniques for Training GANs, 2016, https://arxiv.org/abs/1606.03498

[2] Chintala S., How to train a GAN, https://github.com/soumith/ganhacks

[3] White T., Sampling Generative Networks, School of Design, Victoria University of Wellington, Wellington, 2016, https://arxiv.org/pdf/1609.04468.pdf

[4] LeCun Y., Mathieu M., Zhao J., Energy-based Generative Adversarial Networks, Department of Computer Science, New York University, Facebook Artificial Intelligence Research, 2016, https://arxiv.org/pdf/1609.03126v2.pdf

References

[1] Goodfellow I., Tutorial: Generative Adversarial Networks [online], “NIPS”, 2016, https://arxiv.org/pdf/1701.00160.pdf
[2] Skymind, A Beginner’s Guide to Generative Adversarial Networks (GANs) [online], San Francisco, Skymind, accessed on: 31 May 2019
[3] Goodfellow, Ian, Pouget-Abadie, Jean, Mirza, Mehdi, Xu, Bing, Warde-Farley, David, Ozair, Sherjil, Courville, Aaron, and Bengio, Yoshua. Generative adversarial nets. In: Advances in Neural Information Processing Systems, pp. 2672–2680, 2014
[4] LeCun, Y., What are some recent and potentially upcoming breakthroughs in deep learning?, “Quora”, 2016, accessed on: 31 May 2019, https://www.quora.com/What-are-some-recent-and-potentially-upcoming-breakthroughs-in-deep-learning
[5] Kim T., DiscoGAN in PyTorch, accessed on: 31 May 2019, https://github.com/carpedm20/DiscoGAN-pytorch