ANC — Financial Aspects

Today’s realities are making people increasingly inclined to discuss finances. This applies to both private household budgets and major, global-level investment projects. There is no denying the fact that attention to finances has resulted in the development of innovative methods of analysing them. These range from simple applications that allow us to monitor our day-to-day expenses to huge accounting and bookkeeping systems that support global corporations. The discussions about money also pertain to investment projects in a broader sense. They are very often associated with the implementation of modern technologies, which are implicitly intended to bring even greater benefits, with the final result being greater profit. Yet how do you define profit? And is it really the most crucial factor in today’s perception of business? Finally, how can active noise reduction affect productivity and profit?

What is profit?

The literature explains that “profit is the excess of revenue over costs” [1]. In other words, profit is a positive financial result. Colloquially speaking, it is a state in which you sell more than you spend. This is certainly a desirable phenomenon since, after all, the idea is for a company to be profitable. Profit serves as the basis for further investment projects, enabling the company to continue to meet customer needs. Speaking of profit, one can distinguish several types of it [2]:

  1. Gross profit, i.e. the difference between net sales revenue and costs of products sold. It allows you to see how a unit of your product translates into the bottom line. This is particularly vital for manufacturing companies, which often seek improvements that will ultimately allow them to maintain economies of scale.
  2. Net profit, i.e. the surplus that remains once all costs have been deducted. In balance sheet terms, this is the difference between sales revenue and total costs. In today’s world, it is frequently construed as a factor that indicates the financial health of an enterprise.
  3. Operating profit, i.e. a specific type of profit that is focused solely on the company’s result in its core business area. It is very often listed as EBIT in the profit and loss account.
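
The relationship between the three measures can be sketched with a few invented example figures (all values below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Illustrative calculation of the three profit measures discussed above.
net_sales_revenue = 1_000_000   # revenue from core sales
cost_of_goods_sold = 600_000    # direct costs of products sold
operating_expenses = 250_000    # administration, distribution, etc.
other_costs = 50_000            # interest, taxes and remaining costs

gross_profit = net_sales_revenue - cost_of_goods_sold
operating_profit = gross_profit - operating_expenses   # EBIT
net_profit = operating_profit - other_costs            # the bottom line

print(gross_profit)      # → 400000
print(operating_profit)  # → 150000
print(net_profit)        # → 100000
```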

Profit vs productivity

Productivity is not only about output; it also involves ensuring that the work does not harm the workers’ lives or health over the long term. The general classification of the Central Institute for Labour Protection lists such harmful factors as [3]:

  • noise and mechanical vibration,
  • mechanical factors,
  • chemical agents and dust,
  • musculoskeletal stress,
  • stress,
  • lighting,
  • optical radiation,
  • electricity.

The classification also lists thermal loads, electromagnetic fields, biological agents and explosion and fire hazards. Yet the most common problem is that of industrial noise and vibrations, which the human ear is often unable to pick up at all. Workers in perpetually noisy environments often experience decreased concentration and increased sleepiness. Hence, one may conclude that even something as inconspicuous as noise and vibration generates considerable costs for the entrepreneur, especially in terms of unit costs (for mass production). As such, it is crucial to take noise reduction measures. If you would like to learn more about how to combat noise pollution, click here to sign up for training.

How do you avoid incurring costs?

Today’s R&D companies, engineers and specialists thoroughly research and improve production systems, which allows them to develop solutions that eliminate even the most intractable human performance problems. Awareness of the importance of employee care is deepening year on year. Hence the artificial intelligence boom, which is aimed at creating solutions and systems that facilitate human work. However, such solutions require a considerable investment, and as such, financial engineers make every effort to optimise their costs.

Step 1 — Familiarise yourself with the performance characteristics of the factory’s production system in production and economic terms.

Each production process has unique performance characteristics, which affect production results to some extent. To be measurable, these processes must first be examined using dedicated indicators. Based on knowledge of the process and the data captured by those indicators, it is worth determining process performance at both the production and the economic level. Production performance determines the productivity of the human-machine team, while economic performance examines the productivity issue from a profit-or-loss perspective. Production bottlenecks that determine process efficiency are often identified at this stage. It is worthwhile to draw up a report on the status of production efficiency at this point.

Step 2 — Determine the technical and economic assumptions

The process performance characteristics report serves as the basis for setting the assumptions. It allows you to identify the least and the most efficient processes. The purpose of setting assumptions is to draw up current objectives for the managers of specific processes. In the technical dimension, the assumptions typically relate to the optimisation of production bottlenecks. In the economic dimension, it is worth focusing on cost optimisation, based on the cost accounting used in management accounting. Technical and economic assumptions serve as the basis for implementing innovative solutions, making it possible to greenlight the changes needed to keep a process viable.

Step 3 — Revenue and capital expenditure forecasts vs. active noise reduction

Afterwards, you must carry out predictive testing. It aims to examine how revenue and the capital expenditure incurred for both the implementation and the subsequent operation of the system in an industrial setting are distributed over time.

Forecasted expenditure with ANC
Figure 1 Forecast expenditure in the 2017-2027 period
Forecasted revenue with ANC
Figure 2 Forecast revenue in the 2017-2027 period

From an economic standpoint, the implementation of an active noise reduction system can smooth out income fluctuations over time. Analysis of previous periods clearly shows cyclicality, with linear trends in both increases and decreases. Stabilisation correlates with the implementation of the system described, which may reflect a permanent additional increase in capacity associated with integrating the system into the production process. Hence the conclusion that improvements in productive efficiency result in income stabilisation over time. On the other hand, the implementation of the system requires higher expenditures. The expenditure level, however, trends downwards year on year.

This data allows you to calculate basic measures of investment profitability. At this point, you can also carry out introductory calculations to determine income and expenditure at a single point in time. This allows you to calculate the discount rate and forecast future investment periods [1].

Step 4 — Evaluating investment project effectiveness using static methods

Calculating measures of investment profitability allows you to see if what you wish to put your capital into will give you adequate and satisfactory returns. When facing significant competition, investing in such solutions is a must. Of course, the decisions taken can tip the balance in two ways. Among the many positive aspects of investing are increased profits, reduced costs and a stronger market position. Yet there is also the other side of the coin. Bad decisions, typically based on ill-prepared analyses or made with no analyses at all, often involve lost profits and may force you to incur opportunity costs as well. Even more often, ill-considered investment projects result in a decline in the company’s value. In static terms, we are talking about the following indicators:

  • Annual rate of return,
  • Accounting rate of return,
  • Payback period.

In the present case, i.e. the implementation of an active noise reduction system, the annual and accounting rates of return are approximately 200%, and the payback period settles at less than a year. This is due to the large disparity between the expenses incurred in implementing the system and the benefits of its implementation. However, to be fully confident about the implementation, the Net Present Value (NPV) and the Internal Rate of Return (IRR) still need to be calculated. The NPV and IRR determine the performance of the investment project over the subsequent periods studied.
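
As a rough sketch of how the static measures relate to each other, consider the hypothetical figures below; they are invented purely to mirror the roughly 200% rate of return mentioned above, not real project data:

```python
# Hypothetical illustration of the static investment measures.
initial_outlay = 100_000       # assumed one-off cost of implementing the system
annual_net_profit = 200_000    # assumed yearly benefit attributed to the system

# Annual rate of return: yearly profit relative to the initial outlay
annual_rate_of_return = annual_net_profit / initial_outlay * 100   # in %

# Payback period: how long it takes the profit to cover the outlay
payback_period_years = initial_outlay / annual_net_profit

print(f"{annual_rate_of_return:.0f}%")            # → 200%
print(f"{payback_period_years:.2f} years")        # → 0.50 years
```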

Step 5 — Evaluating effectiveness using dynamic methods

In this section, you must consider the investment project’s efficiency and the impact that this efficiency has on its future value. Therefore, the following indicators must be calculated:

  • Net Present Value (NPV),
  • Net Present Value Ratio (NPVR),
  • Internal Rate of Return (IRR).

In pursuing a policy of introducing innovation, industrial companies face the challenge of maximising performance indicators. Active noise reduction improves working conditions and thus employee performance; this improvement in work productivity is reflected in the financial results, which has a direct impact on the assessment of the effectiveness of such a project. Despite the high initial expenditure, this solution offers long-term benefits by improving production stability.
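
The dynamic measures can be sketched in a few lines of Python. The cash flows and discount rate below are invented for illustration and do not come from the project described; NPVR is taken here as NPV per unit of initial outlay, which is one common definition:

```python
# A minimal sketch of NPV, NPVR and IRR on an assumed cash-flow series.

def npv(rate, cash_flows):
    """Net Present Value: cash_flows[0] is the initial outlay (negative)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-6):
    """Internal Rate of Return found by bisection (the rate where NPV == 0)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-100_000, 40_000, 45_000, 50_000, 55_000]  # outlay, then yearly inflows
rate = 0.10                                          # assumed discount rate

project_npv = npv(rate, flows)
npvr = project_npv / abs(flows[0])   # NPV per unit of invested capital
project_irr = irr(flows)

print(f"NPV:  {project_npv:,.0f}")   # positive → the project adds value
print(f"NPVR: {npvr:.2f}")
print(f"IRR:  {project_irr:.1%}")    # well above the 10% discount rate
```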

Is it worth carrying out initial calculations of investment returns?

To put it briefly: yes, it is. Such calculations prove helpful in decision-making processes. They represent an initial screening for decision-makers – a pre-selection of profitable and unprofitable investment projects. At that point, the management is able to establish the projected profitability even down to the operational level of the business. Reacting to productivity losses allows managers to identify leaking revenue streams and respond earlier with potential technological innovations. A preliminary assessment of cost-effectiveness is a helpful tool for making accurate and objective decisions.

References

[1] D. Begg, G. Vernasca, S. Fischer, Mikroekonomia, PWE, Warszawa 2011.

[2] https://mfiles.pl/pl/index.php/Zysk

[3] P. Felis, Metody i procedury oceny efektywności inwestycji rzeczowych przedsiębiorstw, Wydawnictwo Wyższej Szkoły Ekonomiczno-Informatycznej, Warszawa 2005.

Digital image processing

Signal processing accompanies us every day. All stimuli (signals) received from the world around us, such as sound, light, or temperature, are converted into electrical signals, which are then sent to the brain. In the brain, the analysis and interpretation of the received signal take place. As a result, we extract information from the signal (e.g. we can recognize the shape of an object, we feel heat, etc.).

Digital signal processing (DSP) works similarly. In this case, an analog signal is converted into a digital one by an analog-to-digital converter. The received signals are then processed using a digital computer. DSP systems also use computer peripherals equipped with signal processors, which allow signals to be processed in real time. Sometimes it is necessary to convert the signal back to an analog form (e.g. to control a device). For this purpose, digital-to-analog converters are used.
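
A toy sketch of the analog-to-digital step described above is shown below; the sample rate, bit depth and test tone are arbitrary choices made for illustration:

```python
# Sample a continuous signal and quantise each sample to an 8-bit integer.
import math

SAMPLE_RATE = 8000   # samples per second (assumed)
BITS = 8             # assumed converter resolution

def adc(signal, n_samples):
    """Sample a continuous-time signal in [-1, 1] and quantise to 8 bits."""
    levels = 2 ** BITS
    samples = []
    for n in range(n_samples):
        t = n / SAMPLE_RATE            # time of the n-th sample
        x = signal(t)                  # "analog" value in [-1, 1]
        q = round((x + 1) / 2 * (levels - 1))   # map to the 0..255 range
        samples.append(q)
    return samples

tone = lambda t: math.sin(2 * math.pi * 440 * t)   # 440 Hz test tone
digital = adc(tone, 8)
print(digital[0])   # → 128  (sin(0) == 0 maps to mid-scale)
```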

Digital signal processing has a wide range of applications, including sound processing, speech recognition, and image processing. The last of these will be the subject of this article. We will discuss in depth the basic operation of convolutional filtration in digital image processing.

What is image processing?

Simply speaking, digital image processing consists of transforming an input image into an output image. The aim of this process is to select information – keeping the most important (e.g. shape) and eliminating the unnecessary (e.g. noise). Digital image processing features a variety of different image operations, such as:

  • filtration,
  • thresholding,
  • segmentation,
  • geometry transformation,
  • coding,
  • compression.

As we mentioned before, in this article we will focus on image filtration.

Convolutional filtration

Both in the one-dimensional domain (for audio signals) and in two dimensions, there are specific tools for operating on signals – in this case, on images. One such tool is filtration. It consists of mathematical operations on pixels which produce a new image as a result. Filtration is commonly used to improve image quality or to extract important features from an image.

The basic operation in the filtration method is the 2D convolution function. It allows image transformations to be applied using appropriate filters in the form of matrices of coefficients. Applying a filter consists of calculating a point’s new value based on the brightness values of the points in its closest neighborhood. So-called masks, containing pixel weights based on the values of the closest pixels, are used in these calculations. The usual mask sizes are 3×3, 5×5, and 7×7. The process of convolving an image with a filter is shown below.

Assuming that the image is represented by a 5×5 matrix containing color values and the filter by a 3×3 matrix, the image is modified by convolving these two matrices.

The first thing to do is to flip the coefficients in the filter. We assume that the center of the filtration kernel h(0,0) is in the middle of the matrix, as shown in the picture below. Therefore, the (m,n) indexes denoting rows and columns of the filter matrix will take both negative and positive values.

Image filtration diagram
Img 1 Filtration diagram

Considering the filter matrix (the blue one) as flipped vertically and horizontally, we can perform the filtration operations. They start by placing the h(0,0) → h(m,n) element of the blue matrix over the s(-2,-2) → s(i,j) element of the yellow matrix (the image). Then we multiply the overlapping values of both matrices and add them up. In this way, we obtain the convolution result for the (-2,-2) cell of the output image. It is important to remember the normalization step, which adjusts the brightness of the result by dividing it by the sum of the filter coefficients. This prevents the output image brightness from falling outside the 0–255 range (in the case of an 8-bit image representation).

The next stages of this process are very similar. We move the center of the blue matrix over the (-2,-1) cell, again multiply the overlapping values, add them together and divide the result by the sum of the filter coefficients. We consider cells that fall outside the area of the matrix s(i,j) to be undefined; the values do not exist in these places, so we do not include them in the multiplication.
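
The procedure described above can be sketched in a few lines of Python. The image values and the simple averaging (low-pass) filter below are invented for illustration, and out-of-range (undefined) cells are simply skipped, as described:

```python
# A small sketch of 2D convolution with kernel flipping, normalization
# by the sum of the filter coefficients, and skipping of undefined cells.

def convolve2d(image, kernel):
    rows, cols = len(image), len(image[0])
    k = len(kernel) // 2                        # kernel "radius" (1 for 3x3)
    # flip the kernel vertically and horizontally (true convolution)
    flipped = [row[::-1] for row in kernel[::-1]]
    norm = sum(sum(row) for row in kernel) or 1  # avoid division by zero
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            acc = 0
            for m in range(-k, k + 1):
                for n in range(-k, k + 1):
                    y, x = i + m, j + n
                    if 0 <= y < rows and 0 <= x < cols:  # skip undefined cells
                        acc += image[y][x] * flipped[m + k][n + k]
            out[i][j] = acc // norm              # normalise the brightness
    return out

# 5x5 image of brightness values and a 3x3 averaging (box) filter
image = [[10, 10, 10, 10, 10],
         [10, 50, 50, 50, 10],
         [10, 50, 90, 50, 10],
         [10, 50, 50, 50, 10],
         [10, 10, 10, 10, 10]]
box_filter = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]

print(convolve2d(image, box_filter)[2][2])   # → 54 (the bright centre is smoothed)
```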

Uses of convolutional filtration

Depending on the type of filter, we can distinguish several applications of convolutional filtration. Low-pass filters are used to remove noise from images, while high-pass filters are used to sharpen images or emphasize edges. To illustrate the effects of different filters, we will apply them to a real image. The picture below is in JPG format and was loaded in Octave software as an M×N×3 pixel matrix.

Original input image
Img 2 Original Input Image

Gaussian blur

To blur an image, we need to use the convolution function together with a properly prepared filter. One of the most commonly used low-pass filters is the Gaussian filter. It lowers the sharpness of the image and is also used to reduce noise.

For this article, a 29×29 matrix based on the Gaussian function with a standard deviation of 5 was generated. The normal distribution assigns weights to the surrounding pixels during the convolution process. A low-pass filter suppresses high-frequency image elements while passing low-frequency ones. Compared to the original, the output image is blurry, and the noise is significantly reduced.
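
How such a Gaussian mask can be generated is sketched below in Python (the article’s processing was done in Octave). For readability, the sketch uses a smaller 5×5 mask with σ = 1 rather than the 29×29, σ = 5 filter described above:

```python
# Generate a normalised Gaussian low-pass mask.
import math

def gaussian_kernel(size, sigma):
    """Build a size x size Gaussian mask whose weights sum to 1."""
    k = size // 2
    kernel = [[math.exp(-(x * x + y * y) / (2 * sigma * sigma))
               for x in range(-k, k + 1)]
              for y in range(-k, k + 1)]
    total = sum(sum(row) for row in kernel)
    # normalise so overall image brightness is preserved after filtering
    return [[value / total for value in row] for row in kernel]

kernel = gaussian_kernel(5, 1.0)
print(round(kernel[2][2], 4))   # → 0.1621 (the centre weight is the largest)
```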

Blurred input image
Img 3 Blurred input image

Sharpen

We can make an image blurry, but there is also a way to make it sharper. To do so, a suitable high-pass filter should be used. Such a filter passes through and amplifies image elements characterized by high frequency, e.g. noise or edges, while low-frequency elements are suppressed. By using this filter, the original image is sharpened – this is especially noticeable in the arm area.
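
A typical example of such a high-pass mask is the sharpening kernel below. It is a common textbook choice, not necessarily the exact filter used for the figure; its coefficients sum to 1, so no extra normalization is needed:

```python
# A classic 3x3 sharpening (high-pass) kernel applied at a single pixel.
sharpen_kernel = [[ 0, -1,  0],
                  [-1,  5, -1],
                  [ 0, -1,  0]]

# an invented 3x3 patch of brightness values lying on an edge,
# with the centre pixel on the bright side of the step
patch = [[10, 90, 90],
         [10, 90, 90],
         [10, 90, 90]]

value = sum(patch[i][j] * sharpen_kernel[i][j]
            for i in range(3) for j in range(3))
value = max(0, min(255, value))   # clamp to the 0-255 range

print(value)   # → 170 (the bright edge pixel is pushed even brighter)
```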

Sharpened input image
Img 4 Sharpened input image

Edge detection

Another possible image-processing operation is edge detection. Shifting-and-subtracting filters are used to detect edges in an image. They work by shifting the image and subtracting the original from its shifted copy. As a result of this procedure, edges are detected, as shown in the picture below.
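
The shift-and-subtract idea can be sketched as follows, here with a one-pixel horizontal shift applied to an invented brightness-step image:

```python
# Shift the image one pixel horizontally and subtract; non-zero values
# remain only where the brightness changes, i.e. at the edges.

def shift_subtract_edges(image):
    rows, cols = len(image), len(image[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(1, cols):
            # horizontal difference between a pixel and its left neighbour
            out[i][j] = abs(image[i][j] - image[i][j - 1])
    return out

# an invented image with a vertical brightness step in the middle
image = [[10, 10, 90, 90],
         [10, 10, 90, 90],
         [10, 10, 90, 90]]

print(shift_subtract_edges(image)[0])   # → [0, 0, 80, 0]
```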

Edge detection
Img 5 Edge detection

BFirst.Tech experience with image processing

Our company employs well-qualified staff with experience in the field of image processing. One of our original projects is TIRS, a platform which diagnoses areas in the human body that might be affected by cancerous cells. It is based on advanced image processing algorithms and artificial intelligence. It automatically detects cancerous areas using medical imaging data obtained from computed tomography and magnetic resonance imaging. This platform finds its use in clinics and hospitals.

Our other project, which also requires the use of image processing, is the Virdiamed platform. It was created in cooperation with Rehasport Clinic. This platform enables a 3D reconstruction of CT and MRI data, as well as the viewing of 3D data in a web browser. If you want to read more about our projects, click here.

Digital signal processing, including image processing, is a field of technology with a wide range of application possibilities, and its popularity is constantly growing. Relentless technological progress means that this field of technology is also constantly developing. Moreover, many technologies used every day are based on signal processing, which is why it is certain that the importance of DSP will continue to grow in the future.

References

[1] Leonowicz Z.: „Praktyczna realizacja systemów DSP”

[2] http://www.algorytm.org/przetwarzanie-obrazow/filtrowanie-obrazow.html

Smart Manufacturing

New technologies are finding their place in many areas of life. One of these is industry, where advanced technologies have been used for years and work very well for factories. The implementation of smart solutions based on advanced IT technologies in manufacturing companies has had a significant impact on technological development and improved innovation. One such solution is Smart Manufacturing, which supports industrial optimisation by drawing insights from data generated in manufacturing processes.

What is meant by Smart Manufacturing?

Smart Manufacturing is a concept that encompasses the full integration of systems with collaborative production units that are able to react in real time and adapt to changing environmental conditions, making it possible to meet the requirements within the supply chain. The implementation of an intelligent manufacturing system supports the optimisation of production processes. At the same time, it contributes to increased profits for industrial companies.

The concept of Smart Manufacturing is closely related to concepts such as artificial intelligence (AI), the Industrial Internet of Things (IIoT) and cloud computing. What these three concepts have in common is data. The idea behind smart manufacturing is that the information contained in this data is available whenever necessary and in its most useful form. It is data analysis that has the greatest impact on optimising manufacturing processes and making them more efficient.

IIoT and industrial optimisation

The Industrial Internet of Things is nothing more than the application of the IoT’s potential in the industrial sector. In the intelligent manufacturing model, people, machines and processes are interconnected through IT systems. Each machine features sensors that collect vital data about its operation. The system sends the data to the cloud, where it undergoes extensive analysis. With the information obtained from it, employees have insight into the exact process flow and are able to anticipate failures and prevent them early, avoiding possible downtime. In addition, companies can examine trends in the data or run various simulations based on it. The integration of all elements of the production process also makes it possible to remotely monitor its progress in real time and to react to any irregularities. None of this would be possible without IIoT solutions.
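
As a hedged illustration of the kind of analysis meant here, a very simple anomaly detector might compare each new sensor reading against a rolling baseline; the readings, window size and threshold below are all invented for the example:

```python
# Flag sensor readings that deviate strongly from a rolling baseline.
import statistics

def detect_anomalies(readings, window=5, threshold=3.0):
    """Return indices of readings far outside the recent baseline."""
    anomalies = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean = statistics.mean(baseline)
        stdev = statistics.stdev(baseline) or 1e-9   # guard against zero spread
        if abs(readings[i] - mean) / stdev > threshold:
            anomalies.append(i)   # index of the suspicious reading
    return anomalies

# invented vibration readings from a machine sensor; index 8 is a spike
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 1.0, 0.95, 1.1, 5.0, 1.0]
print(detect_anomalies(vibration))   # → [8]
```

A production system would of course use far richer models, but the principle is the same: the baseline comes from the machine’s own recent history rather than a fixed limit.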

The rise of artificial intelligence

Another modern technological solution that is used in the smart manufacturing system is artificial intelligence. Over the last few years, we have seen a significant increase in the implementation of artificial intelligence solutions in manufacturing. This is now possible, precisely because of the deployment of IIoT devices, which provide huge amounts of data used by AI. Artificial intelligence algorithms analyse the data obtained and search for anomalies in the data. In addition, they enable automated decision-making based on the collected data. What’s more, artificial intelligence is able to predict problems before they occur and take appropriate steps to mitigate them.

Benefits for an enterprise

The implementation of Smart Manufacturing technology in factories can bring a number of benefits, primarily in the optimisation of manufacturing processes. With smart manufacturing, the efficiency can be improved tremendously. By having access to data on the entire process, it is possible to react quickly to any potential irregularities or adapt the process to current needs (greater flexibility). This allows companies to avoid many unwanted events, like breakdowns. This, in turn, has a positive effect on cost optimisation while also improving the company’s profitability. Yet another advantage is better use of machinery and equipment. By monitoring them on an ongoing basis, companies can control their wear and tear, anticipate breakdowns or plan downtime in a more efficient manner. This, in turn, improves productivity and even the quality of the manufactured products.

The use of SM also enables real-time data visualisation. That makes it possible to manage – as well as monitor – the process remotely. In addition, the virtual representation of the process provides an abundance of contextual information that is essential for process improvement. Based on the collected data, companies can also run various types of simulations. They can also anticipate trends or potential problems, which greatly improves forecasting. We should also mention here that implementing modern solutions such as Smart Manufacturing in a company increases their innovativeness. Thus, companies become more competitive and employees perceive them as a more attractive place to work.

Will automation put people out of work?

With technological developments and increasingly widespread process automation, concerns about job losses have also become more apparent. These fears are largely unfounded – people still play a pivotal role in the concept of smart manufacturing. The responsibility of employees to control processes and make critical decisions will therefore remain unchanged. Human-machine collaboration will thus make it possible to increase the operational efficiency of the smart enterprise.

So – the intention behind technological development is not to eliminate man, but rather to support him. What’s more, the combination of human experience and creativity with the ever-increasing capabilities of machines makes it possible to execute innovative ideas that can have a real impact on improving production efficiency. At the same time, the labour market will start to see an increased demand for new experts, ensuring that the manufacturing industry will not stop hiring people.

Intelligent manufacturing is an integral part of the fourth industrial revolution that is unfolding right before our eyes. The combination of machinery and IT systems has opened up new opportunities for industrial optimisation. This allows companies to realistically increase the efficiency of their processes, thereby helping to improve their profitability. BFirst.Tech offers an Industrial Optimisation service that analyses and communicates real-time data to all stakeholders; the information it carries supports critical decision-making and results in continuous process improvement.

References

[1] https://blog.marketresearch.com/the-top-7-things-to-know-about-smart-manufacturing

[2] https://przemyslprzyszlosci.gov.pl/7-krokow-do-zaawansowanej-produkcji-w-fabryce-przyszlosci/?gclid=EAIaIQobChMIl7rb1dnD7QIVFbd3Ch21kwojEAAYASAAEgKVcfD_BwE

[3] https://www.comarch.pl/erp/nowoczesne-zarzadzanie/numery-archiwalne/inteligentna-produkcja-jutra-zaczyna-sie-juz-dzis/

[4] https://elektrotechnikautomatyk.pl/artykuly/smart-factory-czyli-fabryka-przyszlosci

[5] https://www.thalesgroup.com/en/markets/digital-identity-and-security/iot/inspired/smart-manufacturing

[6] https://www.techtarget.com/iotagenda/definition/smart-manufacturing-SM

Technology trends for 2021

For many people, 2020 will remain a memory they are not likely to quickly forget. The coronavirus pandemic has, in a short time, caused many companies to change their previous way of operating, adapting to the prevailing conditions. The issue of employee safety has become crucial, hence many companies have decided to turn to remote working mode. There is no denying that this situation has accelerated the digital transformation process in many industries, thus contributing to the faster development of modern technologies.

As they do every year, the major analyst firms publish rankings in which they present their new technology predictions for the coming year.

Internet of Behaviours

The concept of the Internet of Behaviour (IoB) emerged some time ago but, according to current forecasts, we are going to see significant growth in 2021 and beyond. It involves collecting data about users and linking it to specific types of behaviour. The aim is to improve the process of customer profiling and thus consciously influence users’ behaviour and the decisions they make. IoB employs many different modern technologies – from AI to facial and speech recognition. When it comes to IoB, the safety of the collected data is definitely a contentious issue, on top of which there are the ethical and social aspects of using this data to influence consumers.

Cybersecurity

Because of the COVID-19 pandemic, a lot of companies now operate in remote working mode. Therefore, the question of cyber security has become more important than ever; it is now a key element in ensuring the safe operation of an organisation. With the popularisation of remote working, cyber threats have also increased. It is therefore anticipated that companies will invest in strengthening their security systems to ensure that their data is protected and to prevent possible cyber-attacks.

Anywhere operations

The anywhere operations model is one of the biggest technology trends of 2021. It is about creating an IT environment that gives people the opportunity to work from just about anywhere by implementing business solutions based on a distributed infrastructure. This type of solution will allow employees to access the organisation’s resources regardless of where they are working and will facilitate the exchange and flow of information between them. According to Gartner’s forecasts, as many as 40% of organisations will have implemented this operating model by 2023.

AI development

The list of the biggest modern technology trends of 2021 would not be complete without artificial intelligence, whose steady development we are constantly experiencing. AI solutions such as forecasting, speech recognition and diagnostics are used in many different industries. Machine learning models are also increasingly popular in factories, helping to increase the efficiency of their processes. Over the next few years, we will see the continued development of artificial intelligence and the exploitation of the potential it holds.

Total Experience

Another trend that will most likely be big this year is Total Experience (TX), which is intended to bring together the differing perspectives of customers, employees and users to improve their experience where these elements become intertwined. This approach, combined with modern technology, is supposed to give companies a competitive edge. As a result of the pandemic, most of the interactions among the aforementioned groups happen online. This is why it is so important for their respective experiences to bring them a certain kind of satisfaction, which has an actual impact on companies’ performance.

This year’s technology trends mainly focus on the development of solutions aimed at improving remote working and the experience of moving much of our lives to the online sphere. There is no denying that the pandemic has significantly accelerated the technological development of many companies. This rings particularly true for the micro-enterprises that have had to adapt to the prevailing conditions and have undergone a digital transformation. An important aspect among the projected trends is undeniably cyber security, both for organisations and individuals. BFirst.Tech seeks to meet the growing demand in this area, which is why it offers a Cloud and Blockchain service that employs modern technology to create secure data environments.

References

[1] https://www.gartner.com/en/newsroom/press-releases/2020-10-19-gartner-identifies-the-top-strategic-technology-trends-for-2021

[2] https://mitsmr.pl/b/trendy-technologiczne-2021/PQu9q8s0G

[3] https://www.magazynprzemyslowy.pl/artykuly/7-trendow-w-it-na-2021-rok

[4] https://www.nbc.com.pl/trendy-technologiczne-w-2021%E2%80%AFroku/

Space mining

Mining has accompanied mankind since the dawn of time. The coming years are likely to bring yet another milestone in its development: space mining.

Visions vs reality

Space mining has long fuelled the imagination of writers and screenwriters. They paint a picture of a struggle for resources between states, corporations and cultures inhabiting various regions of the universe. Some also speak of the risks faced by humanity due to possible encounters with other life forms. There is also the topic of extremely valuable minerals and other substances that are unknown on Earth but may be obtained in space.

At the moment, however, these visions are far from becoming a reality. We are in the process of cataloguing space resources, e.g. by making geological maps of the Moon [1] and observing asteroids [2]. Interestingly, the Moon is known to contain deposits of helium-3, which could be used as fuel for nuclear fusion reactions in the future. We expect to find deposits of many valuable minerals on asteroids; for example, nickel, iron, cobalt, water, nitrogen, hydrogen and ammonia are present on the asteroid Ryugu. Our knowledge of space mineral resources is based mainly on astronomical observations. Direct analysis of surface rock samples is much rarer, and analysis of subsurface rocks happens only incidentally. We can only fully analyse objects that have fallen to the Earth’s surface. As such, we should expect many more surprises to come.

First steps in space mining

What will the beginnings look like? As an activity closely linked to the economy, mining will start to develop to meet the needs of the market. Contrary to what we are used to on Earth, access to even basic resources like water can prove problematic in space.

Water

Water can be used directly by humans and, after electrolysis (splitting it into hydrogen and oxygen), it can also serve as fuel. Thus, the implementation of NASA’s plans for a manned expedition to Mars, which is to be preceded by human presence on the Moon [3], will create a demand for water on and near the Moon. Yet another significant market for space water could be satellites, all the more so since estimates indicate that even for Low Earth Orbit (LEO), it may be more profitable to bring water from the Moon than from the Earth.

For these reasons, industrial water extraction on the Moon has the potential to be the first manifestation of space mining. What could this look like in practice? Due to intense ultraviolet radiation, any ice on the lunar surface would have decomposed into oxygen and hydrogen long ago, and since the Moon lacks an atmosphere, these elements would have escaped into space. Ice is thus expected in permanently shaded areas, such as the bottoms of impact craters at the poles. One method of mining this ice could be to evaporate it in a sealed, transparent tent. The energy could come from the Sun: one would only need to redirect sunlight using mirrors placed at the craters’ edges, and near the lunar north pole there are spots where the sun shines virtually all the time.

Regolith

One of the first rocks to be harvested on the Moon is likely to be regolith, the dust that covers the Moon’s surface. While regolith may contain trace amounts of water, the main hope is that it could be used for 3D printing, which would make it possible to construct all the facilities of the planned lunar base quickly and cheaply [4]. The facilities of such a base will need to protect humans against harmful cosmic radiation. And although regolith is not terribly efficient as radiation shielding compared to other materials (you need a thick layer of it), its advantage is that you do not need to ferry it from Earth.

Generally speaking, the ability to use local raw materials to the greatest extent possible is an important success factor for space projects aiming to create sustainable extraterrestrial habitats, which makes optimising these processes a key issue.

Asteroids

Another direction for space mining could be asteroids [5]. Scientists are considering capturing smaller asteroids and bringing them back to Earth. It is also possible to bring both smaller and larger asteroids into orbit and mine them there. Yet another option is to mine asteroids without moving them, delivering only the excavated material, perhaps after initial processing, to Earth.

Legal barriers

One usually overlooked issue is that, apart from the obvious technological and financial constraints, the legal issues surrounding the commercial exploitation of space can prove to be a major barrier [6]. As of today, the four most important international space regulations are as follows [7]:

  • 1967 Outer Space Treaty,
  • 1968 Astronaut Rescue Agreement,
  • 1972 Convention on International Liability for Damage Caused by Space Objects, and
  • 1975 Convention on the Registration of Objects Launched into Outer Space.

They formulate the principles of the freedom and non-exclusivity of space. They also describe the treatment of astronauts as envoys of mankind, attribute a nationality to every object sent into space, and regulate liability for damage caused by such objects. However, they do not regulate the economic matters related to space exploitation. This gap is partly filled by the 1979 Moon Agreement. Although only a handful of states (18) have ratified it, it aspires to create important customary norms extending legal provisions to cover space.

Among other things, it stipulates that the Moon’s natural resources are the common heritage of mankind and that neither the surface nor the resources of the Moon may become anyone’s property [8]. The world’s most affluent countries are reluctant to address its provisions. In particular, the US has officially announced that it does not intend to comply with the Agreement. Could it be that asteroid mining is set to become part of some kind of space colonialism?

References

[1] https://store.usgs.gov/filter-products?sort=relevance&scale=1%3A5%2C000%2C000&lq=moon

[2] http://www.asterank.com

[3] https://www.nasa.gov/topics/moon-to-mars

[4] https://all3dp.com/mit-autonomous-construction-rig-could-make-3d-printed-homes/

[5] http://space.alglobus.net/presentations/

[6] http://naukawpolsce.pap.pl/aktualnosci/news%2C81117%2Cdr-pawel-chyc-prawo-w-kosmosie-szczegolne-wyzwanie.html

[7] http://www.unoosa.org/oosa/en/ourwork/spacelaw/index.html

[8] https://kosmonauta.net/2011/09/uklad-ksiezycowy/

Data Warehouse

A data warehouse is one of the more common topics in the IT industry. The collected data is an important source of valuable information for many companies, thus increasing their competitive advantage. More and more companies use Business Intelligence (BI) systems in their work, which quickly and easily support the analytical process. BI systems are based on data warehouses and we will talk about them in today’s article.

What is a data warehouse?

Put simply, a data warehouse is a central database that collects and integrates data from many dispersed sources in order to support analysis, reporting, and decision-making, rather than day-to-day transaction processing.

Characteristics

There are four main features that characterize a data warehouse. These are:

  • Subject orientation – the collected data is organized around the main subjects of the business, such as sales, products, or customers;
  • Integration – the stored data is uniform, e.g. in terms of format, nomenclature, and coding structures; it is standardized before it reaches the warehouse;
  • Time-variance – the data comes from different time frames; it contains both historical and current data;
  • Non-volatility – the data in the warehouse remains unchanged; users cannot modify it, so we can be sure that we will get the same results every time.

Architecture and operation

In the architecture of a data warehouse, four basic components can be distinguished: data sources, ETL software, the data warehouse proper, and analytical applications. The following graphic shows a simplified diagram of this structure.

Data warehouse graph
Img 1 Diagram of data warehouse operation

As can be seen from the graphic above, the basis of every data warehousing system is data. The sources of this data are dispersed – they include ERP, CRM, and SCM systems, as well as Internet sources (e.g. statistical data).

The extracted data is processed and integrated, and then loaded into the warehouse proper. This stage is called the ETL process, after its steps: extract, transform, and load. First, data is taken from the available sources (extract). Next, the data is transformed, i.e. processed appropriately (cleaning, filtering, validation, or removing duplicates). The last step is to load the data into the target database, i.e. the data warehouse.
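To make the three steps concrete, here is a minimal, hypothetical sketch of an ETL pipeline in Python; the source systems, field names, and records are invented for illustration:

```python
# A hypothetical ETL sketch: extract rows from two invented sources (CRM
# and ERP), transform them to a common format, and load them into a
# warehouse table.

def extract(sources):
    # Extract: pull raw records from each source system (here, plain lists).
    for source in sources:
        yield from source

def transform(records):
    # Transform: standardize names and amounts, and drop duplicate records.
    seen = set()
    for rec in records:
        name = rec["customer"].strip().lower()
        amount = round(float(rec["amount"]), 2)
        if (name, amount) in seen:
            continue  # duplicate record - skip it
        seen.add((name, amount))
        yield {"customer": name.title(), "amount": amount}

def load(records, warehouse):
    # Load: append the cleaned records to the target table.
    warehouse.extend(records)

crm = [{"customer": " anna nowak ", "amount": "120.50"}]
erp = [{"customer": "ANNA NOWAK", "amount": 120.5},   # duplicate of the CRM row
       {"customer": "jan kowalski", "amount": "80"}]

warehouse = []
load(transform(extract([crm, erp])), warehouse)
print(warehouse)  # two unique, standardized records
```

In a real system, the extract step would read from ERP/CRM databases or files, and the load step would write to the warehouse’s own tables rather than a Python list.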

As we mentioned earlier, the collected data is read-only. Users retrieve data from the warehouse using appropriate queries, and the results are presented in a more accessible form, i.e. reports, diagrams, or visualizations.

Main tasks

The main task of a data warehouse is analytical data processing (OLAP, On-Line Analytical Processing). It allows for making various types of summaries, reports, or charts presenting significant amounts of data – for example, a chart of first-quarter sales or a report of the products generating the highest revenue.
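As a toy illustration of such summaries, the sketch below aggregates a small, invented fact table by quarter and by product:

```python
# A toy OLAP-style summary over an invented fact table: total sales per
# quarter and the product generating the highest revenue.
from collections import defaultdict

sales = [  # (month, product, revenue)
    (1, "A", 100), (2, "B", 250), (3, "A", 150),
    (4, "B", 300), (5, "A", 120), (7, "C", 500),
]

by_quarter = defaultdict(int)
by_product = defaultdict(int)
for month, product, revenue in sales:
    by_quarter[(month - 1) // 3 + 1] += revenue  # months 1-3 -> Q1, etc.
    by_product[product] += revenue

print(by_quarter[1])                        # 500 - first-quarter sales
print(max(by_product, key=by_product.get))  # B - highest-revenue product
```

A real OLAP engine answers the same kinds of group-by questions, only over far larger fact tables and along many dimensions at once.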

Another task of this tool is decision support in enterprises (DSS, Decision Support System). Given the huge amount of information stored in data warehouses, they form part of companies’ decision support systems. Thanks to advanced analyses conducted on these databases, it is much easier to find dominant trends, patterns, or relations between various factors, which can facilitate managerial decision-making.

Another of the tasks of these specific databases is to centralize data in the company. Data from different departments and levels of the company is collected in one place, so everyone interested has access to it whenever they need it.

Centralization is connected with another role of a data warehouse: archiving. Because the data collected in the warehouse comes from different periods, and new, current data is added on an ongoing basis, the warehouse also becomes an archive of data and information about the company.

Summary

Data warehousing is undoubtedly a useful and functional tool that brings many benefits. Implementing such a database may facilitate and speed up some of the processes taking place in your company. An enormous amount of data and information is generated every day, and data warehouses are a perfect way to store this information in one safe place, accessible to every employee. If you want to introduce a data warehousing system in your company, check our product Data Engineering.

Bibliography

[1] https://www.oracle.com/pl/database/what-is-a-data-warehouse/

Sight-playing — part 1

During their education, musicians need to acquire the ability to play a vista, that is, to play an unfamiliar piece of music without having had a chance to get familiar with it beforehand. Thanks to this skill, virtuosos can not only play most pieces without preparation but also need much less time to learn the more demanding ones. However, learning to play a vista takes many musical pieces. The pieces used for such practice should be little-known and matched to the skill level of the musician concerned. Future virtuosos must therefore devote a lot of their time (and that of their teachers) to preparing such a playlist, which further discourages learning. Worse still, once used, a playlist is no longer useful for anything.

The transistor composer

But what if we had something that could prepare such musical pieces on its own, in a fully automated way? Something that could not only create the playlist but also match the difficulty of the musical pieces to the musician’s skill level. This idea paved the way for the creation of an automatic composer — a computer programme that composes musical pieces using artificial intelligence, which has been gaining popularity in recent times.

Admittedly, the word “composing” is perhaps somewhat of an exaggeration, and the term “generating” would be more appropriate. Then again, human composers also create musical pieces based on their own algorithms. Semantics aside, what matters here is that such a (simple, for the time being) programme has been successfully created and budding musicians could benefit from it.

However, before we discuss how to generate musical pieces, let us first learn the basics of how musical pieces are structured and what determines their difficulty.

Fundamentals of music

The basic concepts in music include the interval, semitone, chord, bar, metre, musical scale and key of a musical piece. An interval is a quantity that describes the distance between two consecutive notes of a melody. Although its unit is the semitone, it is common practice to use the names of specific intervals. A semitone, in turn, is the smallest accepted difference between pitches (approximately 5% in frequency). Pitch differences can be arbitrarily small; this division into semitones has simply become accepted as the standard. A chord is three or more notes played simultaneously. The next concept is the bar: what lies between the vertical lines on the stave. Sometimes a musical piece may begin with an incomplete bar (anacrusis).

Visualization of the anacrusis
Figure 1 Visualisation of an anacrusis
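The roughly 5% figure for a semitone can be checked with a quick calculation: in equal temperament, each semitone multiplies the frequency by the twelfth root of two.

```python
# Equal-temperament semitone: each semitone multiplies the frequency by
# 2**(1/12), i.e. a change in pitch of roughly 5-6%.
semitone = 2 ** (1 / 12)
print(round((semitone - 1) * 100, 1))  # 5.9 (percent per semitone)

# Twelve semitones make an octave: A4 (440 Hz) moved up an octave gives A5.
print(round(440 * semitone ** 12))     # 880 (Hz)
```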

Metre — this term refers to how many rhythmic values there are in one bar. In 4/4 metre, there should be four quarter notes to each bar; in 3/4 metre, three quarter notes; and in 6/8 metre, six eighth notes. Although 3/4 and 6/8 denote the same total duration, these metres are different: the accents fall on different places in the bar. In 3/4 metre, the accent falls on the first quarter note (properly speaking, “on the downbeat”). By comparison, in 6/8 metre, the accent falls on the first and fourth eighth notes of the bar.

A musical scale is a set of sounds that define the sound material that musical works use. The scales are ordered appropriately — usually by increasing pitch. The most popular scales are major and minor. While many more scales exist, these two predominate in the Western cultural circle. They were used in most of the older and currently popular pieces. Another concept is key, which identifies the tones that musical pieces use. In terms of scale vs. key, scale is a broader term; there are many keys of a given scale, but each key has its own scale. The key determines the sound that the scale starts with.

Structure of a musical piece

In classical music, the most popular principle for shaping a piece of music is periodic structure. The compositions are built using certain elements, i.e. periods, which form a separate whole. However, several other concepts must be introduced to explain them.

A motif is a sequence of several notes, repeated in the same or slightly altered form (variation) elsewhere in the work. Typically, the duration of a motif is equal to the length of one bar.

A variation of a motif is a form of the motif that has been altered in some way but retains most of its characteristics, such as its rhythm or a characteristic interval. Musical pieces do not contain numerous motifs at once; a single piece is mostly composed of variations of a single motif. Thanks to this, each musical piece has a character of its own and does not surprise the listener with new musical material every now and then.

A musical theme is usually a sequence of 2-3 motifs that are repeated (possibly in slightly altered versions) throughout the piece. Not every piece of music needs to have a theme.

A phrase is a short sequence of motifs (and their variations) forming a natural fragment of the melody. A sentence, in turn, is made up of two or more phrases.

A period is defined by the combination of two musical sentences. Below is a simple small period with its basic elements highlighted.

Scheme of the periodic structure of a musical piece
Figure 2 Periodic structure diagram of a musical piece

This is roughly what the periodic structure looks like. Several notes form a motif, a few motifs create a phrase, a few phrases comprise a sentence, a few sentences make up a period, and finally, one or more periods form a whole musical piece. There are also alternative methods of creating musical pieces. However, the periodic structure is the most common and, importantly in this case, the easiest to program.

Composing in harmony

Compositions are typically based on harmonic flows – sequences of chords that have their own “melody” and rhythm. The successive chords in a harmonic flow are not completely random. For example, the F major and G major chords are very likely to be followed by C major. By contrast, E minor is less likely to follow them, and D sharp major is completely unlikely. There are certain rules governing these chord relationships, but we do not need to delve into them here, since we will be using statistical models to generate song harmonies.

Instead, we need to understand what harmonic degrees are. Every key has several important chords called triads. Their roots are the successive notes of the given key, and the remaining notes of each triad also belong to that key. For example, the first degree of the key of C major is the C major chord, the second degree is the D minor chord, the third degree is the E minor chord, and so on. Harmonic degrees are denoted by Roman numerals; major chords are usually written in upper case and minor chords in lower case (the basic degrees of the major scale: I, ii, iii, IV, V, vi, vii).

Harmonic degrees are “universal” chords of a sort: no matter what note a key starts on, the probabilities of successive harmonic degrees are the same. In the key of C major, the chord sequence C – F – G – C is just as likely as the sequence G – C – D – G in the key of G major. This example shows one of the most common harmonic flows used in music, expressed in degrees: I – IV – V – I.
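A statistical model of such flows can be sketched as a table of transition probabilities between degrees; the probabilities below are invented for illustration, not taken from any corpus:

```python
# A sketch of a statistical model of harmonic flows: from each degree we
# pick the next one at random, weighted by how often that progression
# might occur (the weights below are invented for illustration).
import random

transitions = {
    "I":  {"IV": 0.4, "V": 0.4, "vi": 0.2},
    "IV": {"V": 0.6, "I": 0.4},
    "V":  {"I": 0.8, "vi": 0.2},
    "vi": {"IV": 0.6, "V": 0.4},
}

def next_degree(current):
    options = transitions[current]
    return random.choices(list(options), weights=list(options.values()))[0]

flow = ["I"]
for _ in range(3):
    flow.append(next_degree(flow[-1]))
print(" - ".join(flow))  # e.g. "I - IV - V - I"
```

With probabilities estimated from real pieces instead of guessed, the same loop would generate flows that sound statistically like the corpus it was trained on.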

Melody sounds are not completely arbitrary; they are governed by many rules and exceptions. Below is an example of a rule and an exception in creating harmony:

  • Rule: on every beat of a bar, there should be a note belonging to the current chord,
  • Exception: sometimes notes that do not belong to the chord fall on a beat; however, they are then followed relatively quickly by a note of that chord.

These rules and exceptions in harmony do not have to be strictly adhered to. However, if one does comply with them, there is a much better chance that one’s music will sound good and natural.

Factors determining the difficulty of a musical piece

Several factors influence the difficulty of a piece of music:

  • tempo — in general, the faster a musical piece is, the more difficult it gets, irrespective of the instrument (especially when playing a vista)
  • melody dynamics — a melody consisting of two sounds will be easier to play than one that uses many different sounds
  • rhythmic difficulty — the more complex the rhythm, the more difficult the musical piece. The difficulty of a musical piece increases as the number of syncopations, triplets, pedal notes and similar rhythmic “variety” grows higher.
  • repetition — no matter how difficult a melody is, it is much easier to play if parts of it are repeated, as opposed to one that changes all the time. It is even worse in cases where the melody is repeated but in a slightly altered, “tricky” way (when the change of melody is easy to overlook).
  • difficulties related to musical notation — the more extra accidentals (flats, sharps, naturals), the more difficult a musical piece is
  • instrument-specific difficulties – some melodic flows can have radically different levels of difficulty on different instruments, e.g. two-note tunes on the piano or guitar are much easier to play than two-note tunes on the violin

Some keys are more difficult than others because they have more accidentals in the key signature to remember.

Technical aspects of the issue

Since we have outlined the musical side in the previous paragraphs, we will now focus on the technical side. To get into it properly, it is necessary to delve into the issue of “conditional probability”. Let us start with an example.

Suppose we do not know where we are, nor do we know today’s date. What is the likelihood of it snowing tomorrow? Probably quite small (in most places on Earth, it never or hardly ever snows), so we will estimate this likelihood at about 2%. However, we have just found out that we are in Lapland, a land located just beyond the Arctic Circle. Bearing this in mind, what would the likelihood of it snowing tomorrow be now? Certainly much higher than before. Unfortunately, this information does not settle the matter, since we do not know the current season, so we will set our probability at 10%. The last piece of information we receive is that it is the middle of July – summer is in full swing. As such, we can put the probability of it snowing tomorrow at 0.1%.

Conditional probability

The above story leads to a simple conclusion: the probability depended on the state of our knowledge and could shift in either direction as that knowledge changed. This is how conditional probabilities, which are denoted as follows, work in practice:

P(A|B)

They inform us of how probable it is for an event to occur (in this case, A) if some other events have occurred (in this case, B). An “event” does not necessarily mean an occurrence or incident — it can be, as in our example, any condition or information.

To calculate conditional probabilities, we must know how often event B occurs and how often events A and B occur at the same time. It will be easier to explain by returning to our example. Assuming that A is snow falling and B is being in Lapland, the probability of snow falling given that we are in Lapland is equal to:

P(snow | Lapland) = P(snow ∩ Lapland) / P(Lapland)

The same equation, expressed more formally and using the accepted symbols A and B, would be as follows:

P(A|B) = P(A∩B) / P(B)

Note that this is not the same as the likelihood of it snowing in Lapland. Perhaps we visit Lapland more often in winter and it is very likely to snow when we are there?

Now, to calculate this probability exactly, we need two statistics:

  • N(A∩B) — how many times it snowed when we were in Lapland,
  • N(B) — how many times we have been to Lapland,

and how many days we have lived so far (or how many days have passed since we started keeping the above statistics):

  • N(TOTAL).

We will use this data to calculate P(A∩B) and P(B) respectively:

P(A∩B) = N(A∩B) / N(TOTAL)
P(B) = N(B) / N(TOTAL)

At last, we have what we expected:

P(A|B) = P(A∩B) / P(B) = N(A∩B) / N(B)

The probability of it snowing given that we are in Lapland is equal to the ratio of how many times it snowed when we were in Lapland to how many times we were in Lapland. It is also worth adding that the more often we have been to Lapland, the more accurate this estimate will be (if we have spent 1,000 days in Lapland, we will have a much better idea than if we have been there 3 times).
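The whole calculation fits in a few lines of Python; the counts below are made up for illustration:

```python
# Computing P(A|B) from the counts described above; the numbers are
# invented for illustration.
n_a_and_b = 40     # N(A∩B): days it snowed while we were in Lapland
n_b = 100          # N(B): days we spent in Lapland
n_total = 10_000   # N(TOTAL): all days covered by our statistics

p_a_and_b = n_a_and_b / n_total
p_b = n_b / n_total
p_a_given_b = p_a_and_b / p_b

print(round(p_a_given_b, 2))  # 0.4 - the N(TOTAL) terms cancel, leaving 40/100
```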

N-grams

The next thing we need to know before taking up algorithmic music composition is N-grams: how to create them and how to use them to generate probable sequences of data. N-grams are statistical models; a single N-gram is a sequence of elements of length N, so there are 1-grams, 2-grams, 3-grams, and so on. Such models are often used in language modelling, where they make it possible to determine how probable a given sequence of words is.

To do that, you take a language corpus (lots of books, newspapers, websites, forum posts, etc.) and count how many times a particular sequence of words occurs in it. For example, if the sequence “zamek królewski” [English: king’s castle] occurs 1,000 times in the corpus and the sequence “zamek błyskawiczny” [English: zip fastener] occurs 10 times, this means that the first sequence is 100 times more likely than the second. Such information can prove useful, as it allows us to determine how probable any given sentence is.
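Counting 2-grams of this kind takes only a few lines of Python; the tiny “corpus” below is invented for illustration:

```python
# A minimal bigram (2-gram) count over a tiny invented "corpus":
# how often does each pair of consecutive words occur?
from collections import Counter

corpus = ("zamek królewski w warszawie . "
          "zamek królewski nad wisłą . "
          "zamek błyskawiczny w kurtce .").split()

bigrams = Counter(zip(corpus, corpus[1:]))  # pair each word with its successor

print(bigrams[("zamek", "królewski")])     # 2
print(bigrams[("zamek", "błyskawiczny")])  # 1
# In this corpus, "zamek królewski" is twice as likely as "zamek błyskawiczny".
```

The same counting scheme applies to music: replace words with notes or chords, and the bigram counts tell you which continuation of a melody or harmonic flow is most probable.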

Safety of IoT devices

The Internet of Things (IoT) is entering our lives at an increasingly rapid pace. Controlling lighting or air conditioning from a smartphone is slowly becoming an everyday reality, and many companies are increasingly willing to introduce IoT solutions into their processes. According to the latest forecasts, 41 billion IoT devices will be connected to the internet by 2027. There is no doubt that IoT offers great opportunities, but at the same time, there is no denying that it can also bring whole new threats. It is therefore worthwhile to be aware of the dangers that may be associated with the use of IoT.

The total number of device installations for IoT is growing every year
Img 1 The total number of device installations for IoT

Threats

Hacking attacks

An extensive network of IoT devices creates many opportunities for hacking attacks, and the potential attack surface grows with the number of IoT devices in operation. It is enough for an attacker to hack into one of these devices to gain access to the entire network and the data that flows through it. This poses a real threat to both individuals and companies.

The loss of data

The loss of data is one of the most frequently mentioned threats posed by IoT. Improperly stored sensitive data, such as names, addresses, PESEL numbers (Polish personal identity numbers), or payment card numbers, can be used in ways undesirable to us (e.g. taking out a loan or stealing money). Moreover, based on data collected by home IoT devices, an attacker can easily learn the habits of the household, which can facilitate sophisticated scams.

Botnet attack

Another threat is the risk of an IoT device being incorporated into a so-called botnet: a network of infected devices that hackers can use to carry out various types of attacks. The most common botnet attack is a DDoS (Distributed Denial of Service) attack, which consists of connecting to a website from multiple devices at the same time, which can make it temporarily unavailable. Other examples of how a botnet can be used include sending spam or mining cryptocurrency on the infected devices. All these attacks are carried out in a manner unnoticeable to the owner of the device. It is enough to click on a link from an unknown source containing malware, and we unknowingly become part of a botnet.

Attacks on machines

From a company’s point of view, attacks on networked industrial robots and machines can be a significant threat. Taking over control of such devices can cause serious damage. For example, hackers can change the production parameters of a component in a way that will not be caught right away but will render the component useless. Attackers can also disturb the operation of machines or interrupt the energy supply. These activities are a serious threat to companies, which could suffer huge financial losses as a result.

How can we protect ourselves?

It may seem that it is impossible to eliminate the dangers of using IoT technology. However, there are solutions that we can implement to increase the safety of our devices. Here are some of them:

Strong password

An important aspect of IoT device security is password strength. Very often, users have simple passwords containing easily identifiable data (e.g. names or dates of birth). The same password is often used for several devices, making it easier to access them, and sometimes users do not change the default password set by the manufacturer. It is therefore important that the password is not obvious. Increasingly often, manufacturers force users to set strong passwords by imposing conditions they must meet: upper- and lower-case letters, numbers, and special characters. This is a very good practice that can increase security on the network.
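Such conditions can be expressed as a simple check. The sketch below is hypothetical (including its minimum length and the sample passwords), not the validation rule of any particular manufacturer:

```python
# A sketch of the password conditions mentioned above: upper- and
# lower-case letters, digits, special characters, and a minimum length.
import string

def is_strong(password, min_length=12):
    return (len(password) >= min_length
            and any(c.isupper() for c in password)
            and any(c.islower() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(is_strong("JanKowalski1980"))  # False - no special character
print(is_strong("k0#Tr!mQ9@wZ"))     # True
```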

Software update

Another way is to regularly update the software used by IoT devices. If manufacturers detect a vulnerability in their security, they can protect users from a potential attack by providing a new software version that eliminates the detected flaws. Ideally, the device should be set to update automatically; then we can be sure that it always runs the latest software version.

Secure home network

Securing your home network is as important as setting a strong access password. Here, too, it is recommended to change the default password set by the router provider. Additionally, the home Wi-Fi network should use an encrypted connection such as WPA2-PSK.

Restraint in buying new devices

Before buying a given device, it is good to consider whether we really need it, rather than treating it as just a cool gadget. Let’s remember that every subsequent IoT device in our environment increases the risk of a potential attack.

All the above-mentioned actions should be taken by users of IoT devices. However, part of the responsibility for protection lies with the manufacturer, e.g. encrypting network messages so that data cannot be intercepted in transit. The most commonly used protection is the TLS (Transport Layer Security) protocol, which helps secure data transmitted over the network. In addition, the manufacturer should regularly test the device’s security features so that any gaps can be caught and eliminated. It is also good practice to secure devices from the outset against automatically connecting to open public networks.

In June 2019, the Cybersecurity Act came into force, aiming to strengthen the cyber security of EU Member States. It regulates the basic requirements to be met by products connecting to the network, which contributes to the safety of these devices. Rapid IoT development is likely to bring more regulations of this kind, which will significantly contribute to maintaining global cyber security.

Summary

The advent of IoT technology has brought a huge revolution, both for individuals and for entire companies. Although IoT brings many benefits and conveniences, you must also be aware that it may pose a threat to the security of our data or even to ourselves. However, it is worth remembering that following the few principles above can make a significant contribution to the safety of your IoT equipment.

References

[1] https://www.businessinsider.com/internet-of-things-report?IR=T

[2] https://medium.com/read-write-participate/minimum-standards-for-tackling-iot-security-70f90b37f2d5

[3] https://www.cyberdb.co/iot-security-things-you-need-to-know/

[4] https://www.politykabezpieczenstwa.pl/pl/a/czym-jest-botnet

[5] https://www.cyberdefence24.pl/rewolucja-w-cyberbezpieczenstwie-ue-akt-ws-cyberbezpieczenstwa-wchodzi-w-zycie

Industrial noise

Industrial noise is nowadays just as important a problem as air pollution or waste management, although it seems to receive less attention in the media. Meanwhile, it can affect our well-being and health just as much. The Act of 27 April 2001 – Environmental Protection Law treats noise as pollution. Therefore, the same general principles of conduct should be adopted for it as for other environmental pollution, e.g. air or soil pollution.


The noise generated in industrial halls is a workplace noise issue. Industrial halls are, for the most part, huge and often high spaces through which the noise generated by machines and people spreads. Depending on the size of the hall and the number of machines working in it, the noise problem may be large but still within certain standards. Unfortunately, in many cases it exceeds the acceptable norms, which has negative consequences.

Employee working conditions

Workplace conditions are precisely described in the act that defines noise standards. The Act sets the Maximum Permissible Intensity (Polish: Najwyższe Dopuszczalne Natężenie), meaning the intensity of a physical factor harmful to health whose impact during work should not cause negative changes in the employee’s state of health. For an 8-hour working day or working week, it is 85 dB. If the noise continuously exceeds this standard, it may harm employees’ health; moreover, the company can be exposed to penalties due to the lack of proper working conditions.

What if the noise exceeds the allowable 85 dB, but not throughout the whole working day? In this case, appropriate recommendations also apply. Work in constant noise of 95-100 dB may not last more than 40-100 minutes a day, and work in noise of up to 110 dB may not exceed 10 minutes a day.

How can we handle industrial noise?

One of the most common ways to protect employees’ health against noise is to equip them with noise-absorbing earmuffs. There are many types of such devices on the market, some equipped with a noise reduction system; often they even enable communication between employees without removing the device. However, this is not a solution that prevents the occurrence of noise or the vibration of working machines. It merely limits their impact on the employees working there.

The design of production halls is also an important issue. For new halls, solutions that effectively reduce the spread of noise are taken into account at the design stage. It is difficult to apply such solutions in halls that are outdated or offer limited possibilities for reconstruction, as the costs of such modernisation are usually disproportionately high compared to the effects. Considering these issues, other noise reduction methods must be used, including active and passive methods.

Active Noise Control

Active Noise Control (ANC) is a method of reducing unwanted sound by adding another sound source specially designed to cancel the original noise. Superimposing the noise and the anti-noise yields a much quieter overall result. BFirst.tech has its own ANC solution, equipped with an artificial intelligence algorithm that allows industrial noise in the 50-500 Hz range to be reduced to the level of the acoustic background.
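The core idea of cancelling noise with anti-noise can be illustrated numerically. Below is a minimal sketch in plain Python, with a single sine tone standing in for the noise; the 0.05 rad phase error of the anti-noise is an arbitrary illustrative value, showing that even an imperfect anti-noise signal removes most of the acoustic energy.

```python
import math

F = 100.0           # noise frequency, Hz
FS = 8000           # sampling rate, Hz
PHASE_ERROR = 0.05  # phase mismatch of the anti-noise, radians

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

t = [n / FS for n in range(FS)]  # one second of samples
noise = [math.sin(2 * math.pi * F * x) for x in t]
# Anti-noise: the same tone inverted, with a slight phase error
anti = [-math.sin(2 * math.pi * F * x + PHASE_ERROR) for x in t]
residual = [n + a for n, a in zip(noise, anti)]

attenuation_db = 20 * math.log10(rms(noise) / rms(residual))
print(round(attenuation_db, 1))  # roughly 26 dB of attenuation
```

Even with the deliberate 0.05 rad error, the residual carries only a few percent of the original amplitude; a perfectly phased anti-noise would cancel the tone completely.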

Active Noise Control controller
Img 1 Active Noise Control device enclosure

The system includes an algorithm that adapts in real time to changes in industrial noise. The advantage of such a solution is that it does not operate rigidly once programmed but can react to changes in machine operation, e.g. a change in the rotational speed of a mechanical system. The system is designed for both open and closed rooms, which makes it ideal for industrial halls.
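A simple way to picture such real-time adaptation is the LMS (least-mean-squares) algorithm, a basic building block of many adaptive noise-cancelling controllers. The sketch below is a narrowband two-weight filter in plain Python; all parameter values are illustrative assumptions, and a real ANC system would also have to model the acoustic path (as in FxLMS). The weights adapt until the error signal, i.e. the residual noise, almost vanishes.

```python
import math

OMEGA = 2 * math.pi * 0.01  # normalised noise frequency, rad/sample
MU = 0.05                   # LMS step size
N = 3000                    # number of samples

w1 = w2 = 0.0               # adaptive filter weights
errors = []
for n in range(N):
    # Reference signals: a sine/cosine pair at the noise frequency
    r1, r2 = math.sin(OMEGA * n), math.cos(OMEGA * n)
    d = 0.8 * math.sin(OMEGA * n + 0.7)  # noise to be cancelled
    y = w1 * r1 + w2 * r2                # anti-noise estimate
    e = d - y                            # residual heard at the error microphone
    w1 += MU * e * r1                    # LMS weight update
    w2 += MU * e * r2
    errors.append(e)

def rms(x):
    return math.sqrt(sum(v * v for v in x) / len(x))

print(rms(errors[:200]) > 10 * rms(errors[-200:]))  # residual shrinks: True
```

If the machine changed speed, i.e. OMEGA drifted, the same update rule would re-converge the weights, which is the "reacts to changes in machine operation" behaviour described above.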

Passive Noise Control

Noise problems can be solved in many ways. Another option is to use passive methods, e.g. acoustic systems for noise (absorbers, mats, acoustic panels) or vibro-isolating systems. The key to properly designing passive solutions is to study the work environment, locate the sources of noise and determine how it propagates, and tailor efficient solutions accordingly. These solutions consist primarily of arranging individual elements in the work environment and selecting materials with parameters that will effectively absorb the resulting noise.

What path to choose?

To effectively fight noise and vibrations in industry and the environment, we offer our innovative solution, Intelligent Acoustics. If you want to read more about the functionalities of Intelligent Acoustics, click here.

References

[1] http://www.prawo.pl/kadry/halas-w-srodowisku-pracy,186770.html

[2] http://forbes.pl/kariera/dopuszczalny-poziom-halasu-w-miejscu-pracy-obowiazki-pracodawcy/kmvctgb

[3] http://acoustics.org.pl/

[4] https://aes2.org/

Generative Adversarial Networks

GANs, i.e. Generative Adversarial Networks, were first proposed in 2014 by Ian Goodfellow, then a PhD student at the University of Montreal, and his colleagues (including Yoshua Bengio). In 2016, Facebook’s AI research director and New York University professor Yann LeCun called them “the most interesting idea in the last 10 years in machine learning”.

In order to understand what GANs are, it is necessary to compare them with discriminative algorithms like the simple Deep Neural Networks (DNNs). For an introduction to neural networks, please see this article. For more information on Convolutional Neural Networks, click here.

Let us use the issue of predicting whether a given email is spam or not as an example. The words that make up the body of the email are the variables that determine one of two labels: “spam” and “non-spam”. A discriminative algorithm learns from the input vector (the words occurring in a given message, converted into a mathematical representation) to predict how likely the given email is to be spam. In other words, the output of the discriminator is the probability that the input is spam, so it learns the relationship between the input and the output.
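Such a discriminative model can be sketched in a few lines: a bag-of-words vector and a logistic function mapping it to P(spam | email). The vocabulary and the weights below are hypothetical values chosen purely for illustration, not a trained model.

```python
import math

# Hypothetical vocabulary and hand-picked weights (positive = spam-like)
VOCAB = ["free", "winner", "money", "meeting", "report"]
WEIGHTS = [2.0, 2.5, 1.5, -2.0, -1.5]
BIAS = -1.0

def spam_probability(email_text):
    """Discriminator: maps an email to the probability that it is spam."""
    words = email_text.lower().split()
    x = [words.count(w) for w in VOCAB]        # bag-of-words input vector
    score = BIAS + sum(w * xi for w, xi in zip(WEIGHTS, x))
    return 1 / (1 + math.exp(-score))          # logistic output, P(spam | x)

print(spam_probability("free money for every winner"))          # close to 1
print(spam_probability("quarterly report before the meeting"))  # close to 0
```

A real spam filter would learn the weights from labelled data, but the mapping from input to a spam probability is exactly the discriminator relationship described above.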

GANs do the exact opposite. Instead of predicting a label from the input data, they try to predict the data given a label. More specifically, they try to answer the following question: assuming this email is spam, how likely is this data?

Even more precisely, the task of Generative Adversarial Networks is to solve the problem of generative modelling, which can be done in two ways (in both cases you need a large collection of data, e.g. images or sound). The first possibility is density estimation: given numerous examples, you want to find the probability density function that describes them. The second approach is to create an algorithm that learns to generate data resembling the training dataset (the aim is not to re-create the same information but to create new information that could plausibly belong to that data).
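The first approach, density estimation, can be illustrated in its simplest form: fitting a Gaussian density to a handful of one-dimensional samples. This is only a toy illustration of the idea, not how GANs themselves work; GANs follow the second approach and never write the density down explicitly.

```python
import math
import statistics

# A few one-dimensional "training examples"
samples = [1.9, 2.1, 2.0, 2.4, 1.6, 2.2, 1.8, 2.0]

# Density estimation: assume a Gaussian family and fit its parameters
mu = statistics.mean(samples)
sigma = statistics.pstdev(samples)

def density(x):
    """Estimated probability density function of the data."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# The estimated density is highest near the bulk of the samples
print(density(2.0) > density(5.0))  # True
```

Once such a density is known you could also sample from it; a GAN instead learns the sampling procedure directly, skipping the explicit density.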

What generative modelling approach do GANs use?

This approach can be likened to a game played by two agents. One is a generator that attempts to create data. The other is a discriminator that predicts whether that data is real or not. The generator’s goal is to fool the other player. So, over time, as both get better at their tasks, the generator is forced to produce data that is as similar as possible to the training data.

What does the learning process look like?

The first agent, i.e. the discriminator (some differentiable function D, usually a neural network), receives a piece of the training data as input (e.g. a photo of a face). This input is called x (it is simply the name of the model input), and the goal is for D(x) to be as close to 1 as possible, meaning that D recognises x as a true example.

The second agent, i.e. the generator (a differentiable function G; it is usually a neural network as well), receives white noise z (random values that allow it to generate a variety of plausible images) as input. Then, applying the function G to the noise z, one obtains x (in other words, G(z) = x). We hope that sample x will be quite similar to the original training data but will have some problems, such as noticeable noise, that may allow the discriminator to recognise it as a fake example. The next step is to apply the discriminator function D to the fake sample x from the generator. At this point, the goal of D is to make D(G(z)) as close to zero as possible, whereas the goal of G is for D(G(z)) to be close to one.
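The alternating updates described above can be condensed into a deliberately tiny, runnable sketch: the real data is a fixed scalar, the “generator” is a single parameter theta, and the “discriminator” is a one-input logistic unit. All sizes and learning rates are illustrative choices; real GANs use deep networks and stochastic data, but the push-and-pull of the two updates is the same.

```python
import math

REAL = 3.0        # the "dataset": a single real value
LR = 0.05         # learning rate for both players
theta = 0.0       # generator output G(z) = theta (it ignores z here)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w * x + c)

def sigmoid(s):
    return 1 / (1 + math.exp(-s))

for _ in range(600):
    # Discriminator step: push D(REAL) towards 1 and D(theta) towards 0
    d_real, d_fake = sigmoid(w * REAL + c), sigmoid(w * theta + c)
    w += LR * ((1 - d_real) * REAL - d_fake * theta)
    c += LR * ((1 - d_real) - d_fake)
    # Generator step: push D(G(z)) towards 1, i.e. move theta to where D is high
    d_fake = sigmoid(w * theta + c)
    theta += LR * (1 - d_fake) * w

print(theta)  # drifts from 0.0 towards the real value 3.0
```

The generator never sees the real value directly; it only follows the discriminator’s gradient, yet it ends up imitating the data, which is the essence of adversarial training.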

This is akin to the struggle between money counterfeiters and the police. The police want the public to be able to use real banknotes without the possibility of being cheated, as well as to detect counterfeit ones and remove them from circulation, and punish the criminals. At the same time, counterfeiters want to fool the police and use the money they have created. Consequently, both the police and the criminals are learning to do their jobs better and better.

Assuming that the hypothetical capabilities of the police and the counterfeiters — the discriminator and the generator — are unlimited, then the equilibrium point of this game is as follows: the generator has learned to produce perfect fake data that is indistinguishable from real data, and as such, the discriminator’s score is always 0.5 — it cannot tell if a sample is true or not.
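This equilibrium can be checked directly on the value function from the original GAN paper, V(D, G) = E[log D(x)] + E[log(1 - D(G(z)))]. When the discriminator outputs 0.5 everywhere, each expectation is log 0.5, so the value is -log 4:

```python
import math

def value_function(d_real, d_fake):
    """GAN minimax value for given discriminator outputs on real/fake samples."""
    return math.log(d_real) + math.log(1 - d_fake)

# A confident, correct discriminator achieves a value close to 0...
print(value_function(0.99, 0.01))
# ...while at equilibrium D outputs 0.5 for every sample
print(value_function(0.5, 0.5))  # -log 4, about -1.386
```

The discriminator tries to maximise this value and the generator to minimise it; -log 4 is the saddle point the two players settle into.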

What are the uses of GANs?

GANs are used extensively in image-related operations. This is not their only application, however, as they can be used for any type of data.

Style Transfer by CycleGAN
Figure 1 Style Transfer carried out by CycleGAN

For example, the DiscoGAN network can transfer a style or design from one domain to another (e.g. transform a handbag design into a shoe design). It can also generate a plausible image from an item’s sketch (many other networks can do this, too, e.g. Pix2Pix). Known as Style Transfer, this is one of the more common uses of GANs. Other examples of this application include the CycleGAN network, which can transform an ordinary photograph into a painting reminiscent of artworks by Van Gogh, Monet, etc. GANs also enable the generation of images based on a description (StackGAN network) and can even be used to enhance image resolution (SRGAN network).

Useful resources

[1] Goodfellow I. et al., Improved Techniques for Training GANs, 2016, https://arxiv.org/abs/1606.03498

[2] Chintala S., How to train a GAN, https://github.com/soumith/ganhacks

[3] White T., Sampling Generative Networks, School of Design, Victoria University of Wellington, Wellington, 2016, https://arxiv.org/pdf/1609.04468.pdf

[4] LeCun Y., Mathieu M., Zhao J., Energy-based Generative Adversarial Networks, Department of Computer Science, New York University, Facebook Artificial Intelligence Research, 2016, https://arxiv.org/pdf/1609.03126v2.pdf

References

[1] Goodfellow I., Tutorial: Generative Adversarial Networks [online], “NIPS”, 2016, https://arxiv.org/pdf/1701.00160.pdf
[2] Skymind, A Beginner’s Guide to Generative Adversarial Networks (GANs) [online], San Francisco, Skymind, accessed on: 31 May 2019
[3] Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y., Generative Adversarial Nets, in: Advances in Neural Information Processing Systems, pp. 2672–2680, 2014
[4] LeCun, Y., What are some recent and potentially upcoming breakthroughs in deep learning?, “Quora”, 2016, accessed on: 31 May 2019, https://www.quora.com/What-are-some-recent-and-potentially-upcoming-breakthroughs-in-deep-learning
[5] Kim T., DiscoGAN in PyTorch, accessed on: 31 May 2019, https://github.com/carpedm20/DiscoGAN-pytorch