International Journal of Sensors Wireless Communications and Control - Volume 11, Issue 5, 2021
-
AIoT Data Management, Analytics and Decision Making (Artificial Intelligence of Things Data Management, Analytics and Decision Making)
By Ivan Izonin
Nowadays, the rapid development of hardware for IoT-based systems creates the conditions for building services for many application areas. The number of multifunctional devices connected to the Internet is constantly increasing, yet today most IoT devices only collect and transmit data. The huge amount of data produced by these devices requires efficient and fast approaches to its analysis. This task can be solved by combining Artificial Intelligence and IoT tools. Essentially, AI accelerators can act as universal sensors in IoT systems; in other words, we can create the Artificial Intelligence of Things (AIoT). AIoT can be seen as a shift from data collection to knowledge aggregation. AIoT-based systems are being widely implemented in many high-tech industrial and infrastructure systems. Such systems can not only collect but also analyse various aspects of data for identification, planning, diagnostics, evaluation, monitoring, optimization, etc., at the lower levels of the overall system hierarchy. That is, they work more efficiently and effectively by generating the knowledge needed for real-time analytics and decision-making in their application areas.
-
Formation of Hypercubes Based on Data Obtained from Systems of IoT Devices of Urban Resource Networks
Aim: The aim of this work is to construct a system of interrelated procedures used to form data warehouses for information systems in smart cities. Objectives: The number of IoT devices integrated into the information technology support systems of Smart City resource supply networks is constantly growing, and while functioning they generate extensive sets of various types of information. A single database cannot store the data sets collected from multiple resource networks at the scale of a large city. Methods: This work is devoted to researching and improving the processes of forming data hypercubes, which serve as an information model of the functioning of urban resource networks, on the basis of data coming from systems of IoT devices. The research used methods of information modeling and data hypercube construction that take into account the high dynamics and speed of resource supply processes. Results: Data hypercubes were constructed using the proposed improved methods, which operate on messages received from IoT device systems functioning in real urban resource networks. Using this technology within the information and technological support of urban resource networks makes possible a detailed analysis of the current state of the city's material resources, and identifies trends in their changes and states by comparing data from different time periods and from collections of different origin. Conclusion: The basic characteristics of a data warehouse providing information about urban resource networks are analyzed, and prototype data hypercubes for urban resource networks are analyzed and constructed.
Many categories and attributes describing processes in urban resource networks were classified and parameterized. This made it possible, using hypercube-based OLAP technology, to develop an information technology for complex multidimensional analysis of data obtained from IoT devices that characterize the flow of processes in urban resource networks. The results propose procedures for forming data hypercubes containing information about the operation of urban resource networks, for use in developing real information systems that collect data via IoT devices. The resulting information technology for process support in urban resource networks is designed for different types of resource networks with several providers.
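The roll-up structure of such a hypercube can be illustrated with a tiny self-contained sketch. The dimensions (resource type, district, month) and the readings below are hypothetical, not taken from the article; the point is only how each detail record feeds every aggregated cell along each dimension, which is what OLAP roll-up queries rely on.

```python
from collections import defaultdict
from itertools import product

# Hypothetical IoT meter readings: (resource, district, month, consumption)
readings = [
    ("water", "north", "2021-01", 120.0),
    ("water", "south", "2021-01", 95.5),
    ("gas",   "north", "2021-02", 60.2),
    ("water", "north", "2021-02", 110.3),
]

def build_cube(rows):
    """Aggregate the measure over every combination of dimension values,
    using None as the 'ALL' roll-up marker for each dimension."""
    cube = defaultdict(float)
    for res, dist, month, value in rows:
        # Each reading contributes to 2^3 = 8 cells (detail + roll-ups).
        for key in product((res, None), (dist, None), (month, None)):
            cube[key] += value
    return cube

cube = build_cube(readings)
print(cube[("water", None, None)])       # total water, all districts/months
print(cube[(None, "north", "2021-02")])  # all resources, north, Feb 2021
```

Comparing cells that differ only in the time dimension is exactly the kind of period-over-period analysis the abstract describes.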
-
Mathematical Modeling of the Availability of the Information System for Critical Use to Optimize Control of its Communication Capabilities
Authors: Oleg V. Bisikalo, Viacheslav V. Kovtun, Oksana V. Kovtun and Oksana M. Danylchuk
Background: One communication characteristic of an information system for critical use (ISCU) is its availability: the allocation of a finite volume of computational resources, bounded by the concept of a virtual machine (VM), to an authorized person (AP) in response to an input request, with access to critical data granted according to the created control schemes and the AP's privileges as expressed in the system security policy rules. Objective: The objective of the article is to optimize the communication capabilities of the information system for critical use and to synthesize a mathematical concept of availability oriented to practical application. Methods: The article presents new mathematical models for controlling the availability of an ISCU which, unlike existing ones, take into account the features of the ISCU topology and the rules and essence of its service operations while controlling the access of APs to the information environment (IE) of the system. These models also formalize the connection between the set of service operations and the set of system responses to input requests from APs as a controlled semi-Markov process with resource reservation for protecting the system from the consequences of AP actions. On the basis of the suggested models, a mathematical programming task was formulated that identifies the optimal strategy for managing the availability of the ISCU by minimizing the costs of its functioning, and yields a stochastic estimate of the system's availability at any stage of its life cycle. Results: Based on the created mathematical models, the availability of the ISCU was simulated in the Matlab software environment.
The research results showed that response rules for incoming AP requests based on the proposed models, depending on the system load and the service operations performed in the system IE, make it possible to keep the probability of rejecting incoming AP requests within specified limits while minimizing the cost of operating the ISCU. However, analysis of the empirical results showed that when the security policy rules are constructed from the proposed availability model with resource reservation, a rapidly increasing intensity of incoming requests from APs with high values of the danger characteristic causes the number of access rejections to grow quadratically. In general, the experimental results confirmed the adequacy of the proposed mathematical models of ISCU availability. Conclusion: The study proposes mathematical models of the availability of an information system for critical use to optimize control of its communication capabilities. The studies showed that, to neutralize the consequences of the situation described above, which reduces the availability of the ISCU, a 20% reserve of system resources should be provided at the design stage.
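The effect of a resource reserve on rejection probability can be illustrated with a classical loss-model calculation. The sketch below is not the article's semi-Markov model: it uses the standard Erlang-B blocking formula as a simpler stand-in, with hypothetical traffic and VM-slot counts, to show how roughly 20% more slots reduces the probability that an incoming request is rejected.

```python
def erlang_b(traffic, servers):
    """Erlang-B blocking probability for offered traffic (in Erlangs)
    on a given number of servers (here: VM slots), via the standard
    recursion B(E, 0) = 1, B(E, m) = E*B(E, m-1) / (m + E*B(E, m-1))."""
    b = 1.0
    for m in range(1, servers + 1):
        b = traffic * b / (m + traffic * b)
    return b

# Hypothetical numbers, chosen only for illustration: 8 Erlangs of
# offered load on 10 VM slots vs. 10 slots plus a 20% reserve.
base_slots = 10
for slots in (base_slots, int(base_slots * 1.2)):
    print(slots, "slots -> blocking probability", erlang_b(8.0, slots))
```

Blocking probability falls monotonically as slots are added, which is the qualitative behaviour behind the design-stage reserve recommendation.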
-
The Method and Simulation Model of Element Base Selection for Protection System Synthesis and Data Transmission
Authors: Ivan Tsmots, Vasyl Teslyuk, Taras Teslyuk and Yurii Lukashchuk
Background: The current stage of development of data protection and transmission systems (DPTS) is characterized by an expanding range of application areas, most of which require real-time encryption (decryption) and encoding (decoding) on hardware that satisfies restrictions on size, energy, cost and development time. In this connection, the problem of choosing an element base for synthesizing neuro-like structures for symmetric data encryption-decryption and means of encoding-decoding for data transmission with noise-like codes becomes especially relevant. Methods: Software implementation of data protection and transmission using noise-like codes relies on universal and functionally oriented microprocessors. In software implementations of neuro-like encryption and decryption algorithms, the computational processes are mostly extended in time, with a large volume of data transferred between RAM and the operating devices. Results: A series of element base selection simulations and DPTS syntheses with various technical parameters were performed. In each simulation, the microcontroller, memory blocks, operating nodes and communication interfaces were selected. The simulations were performed on a MacBook Pro 2015 with a Core i7 processor and 16 GB of RAM. Conclusion: It is proposed to implement a real-time DPTS using noise-like codes by combining universal and special approaches based on a processor core supplemented by hardware with a table-algorithmic implementation of the encryption (decryption) processes. The method of element base selection for DPTS synthesis has been improved, a simulation model of element base selection for DPTS synthesis has been developed, and a system for synthesizing data protection and transmission with noise-like codes has been created.
-
Recovery of Missing Sensor Data with GRNN-based Cascade Scheme
Authors: Roman Tkachenko, Ivan Izonin, Ivanna Dronyuk, Mykola Logoyda and Pavlo Tkachenko
Background: Today, IoT-based systems are widely used in various applications. Intelligent analysis of the data collected by IoT devices is an important task for the efficient and successful functioning of such systems. In particular, the reliability of this kind of analysis strongly influences the ability to partially or fully automate certain processes or subsystems. However, imperfect data-collection devices, transmission errors, etc. lead to gaps in the data, and such incompleteness makes effective intelligent data analysis for specific tasks impossible. That is why a scientific solution is needed to the problem of effectively filling the gaps in data collected by IoT sensors. Methods: The authors propose a new prediction method for missing data recovery tasks based on General Regression Neural Networks (GRNN). Results: The possibility of approximating and partially eliminating the error of this computational intelligence tool is proved analytically. A GRNN-based prediction method was developed, its optimal parameters were selected, and its operation was simulated on the problem of recovering missing sensor data in an air pollution monitoring dataset. Conclusion: The developed method was experimentally shown to be the most accurate among methods of this class. Its advantages and disadvantages are described, and perspectives for further research are outlined.
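The core of a GRNN predictor is compact enough to sketch directly: the output is a Gaussian-kernel-weighted average of the training targets (Nadaraya-Watson regression). The data below are invented for illustration; the article's cascade scheme and parameter selection are not reproduced here.

```python
import math

def grnn_predict(train_X, train_y, x, sigma=0.5):
    """General Regression Neural Network prediction: a distance-weighted
    average of training targets with a Gaussian kernel of width sigma."""
    weights = []
    for xi in train_X:
        d2 = sum((a - b) ** 2 for a, b in zip(xi, x))
        weights.append(math.exp(-d2 / (2 * sigma ** 2)))
    total = sum(weights)
    if total == 0:  # query far from every stored pattern
        return sum(train_y) / len(train_y)
    return sum(w * y for w, y in zip(weights, train_y)) / total

# Hypothetical use: recover a missing sensor value from readings of
# correlated co-located sensors.
X = [(0.1, 0.2), (0.4, 0.5), (0.9, 0.8)]  # known sensor readings
y = [10.0, 20.0, 35.0]                    # target sensor values
print(grnn_predict(X, y, (0.4, 0.5)))     # query near the second pattern
```

Smaller sigma makes the prediction hug the nearest stored pattern; larger sigma smooths across all patterns. Choosing sigma is the single-parameter tuning that makes GRNN attractive for gap filling.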
-
Evaluation of the Effectiveness of Different Image Skeletonization Methods in Biometric Security Systems
Background: Internet of Things systems are actively adopting biometric subsystems. For fast, high-quality recognition in sensory biometric control and management systems, skeletonization methods are used at the fingerprint recognition stage. The known skeletonization methods of Zhang-Suen and Hilditch were analyzed against Ateb-Gabor filtering with the wave skeletonization method, which shows good recognition time and quality. Methods: The Zhang-Suen and Hilditch methods and a thinning algorithm based on Ateb-Gabor filtering, which form skeletons of biometric fingerprint images, are considered. The proposed thinning algorithm based on Ateb-Gabor filtering showed better efficiency because it relies on filtering that combines the classic Gabor function with the harmonic Ateb function; this combination makes it possible to form the neighborhood in which the skeleton is built more accurately. Results: Alongside the known methods, a new Ateb-Gabor filtering algorithm with the wave skeletonization method has been developed whose recognition results are of better quality, increasing recognition quality by 3 to 10%. Conclusion: The Zhang-Suen algorithm is a two-pass algorithm: in each iteration it performs two sets of checks during which pixels are removed from the image. It operates on black pixels with eight neighbors, so pixels along the edges of the image are not analyzed. The Hilditch thinning algorithm runs in several passes, checking every pixel and deciding whether to change it from black to white when certain conditions are satisfied. Ateb-Gabor filtering provides better performance, as it yields more hollow shapes and accommodates a larger range of curves. Numerous experimental studies confirm the effectiveness of the proposed method.
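For reference, the two-sub-iteration Zhang-Suen procedure summarized in the conclusion can be written out in a few dozen lines. This is a generic textbook implementation on a 0/1 raster, not the authors' code.

```python
def zhang_suen(image):
    """Zhang-Suen thinning on a binary image (list of lists of 0/1).
    Two sub-iterations per pass delete boundary pixels until stable."""
    img = [row[:] for row in image]
    h, w = len(img), len(img[0])

    def neighbours(y, x):
        # P2..P9, clockwise starting from the pixel directly above
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    def transitions(n):
        # number of 0->1 patterns in the circular sequence P2..P9,P2
        return sum(1 for a, b in zip(n, n[1:] + n[:1]) if a == 0 and b == 1)

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):      # border pixels are not analyzed
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbours(y, x)
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    c1 = (p2 * p4 * p6 if step == 0 else p2 * p4 * p8) == 0
                    c2 = (p4 * p6 * p8 if step == 0 else p2 * p6 * p8) == 0
                    if 2 <= sum(n) <= 6 and transitions(n) == 1 and c1 and c2:
                        to_delete.append((y, x))
            for y, x in to_delete:         # delete after the full scan
                img[y][x] = 0
                changed = True
    return img

# Demo: a 3-pixel-thick horizontal bar thins toward a 1-pixel line.
bar = [[0] * 7 for _ in range(7)]
for y in range(2, 5):
    for x in range(1, 6):
        bar[y][x] = 1
skeleton = zhang_suen(bar)
```

Note the two properties the conclusion mentions: deletions happen in two alternating condition sets per pass, and image-border pixels are skipped because they lack eight neighbors.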
-
Comparison of Q-Coverage P-Connectivity Sensor Node Scheduling Heuristic Between Battery Powered WSN & Energy Harvesting WSN
Authors: Sunita Gupta, Sakar Gupta and Dinesh Goyal
A serious problem in Wireless Sensor Networks (WSNs) is attaining high energy efficiency, since each node is powered by a battery with limited stored energy that cannot easily be replaced or recharged. The emergence of renewable energy harvesting techniques and their combination with sensors gives Energy Harvesting Wireless Sensor Networks (EH-WSNs), shifting the focus from energy preservation to network reliability. For reliability, Coverage and Connectivity are important Quality of Service (QoS) parameters. Many sensor node scheduling heuristics have been developed in the past, some of which address more than a single order of Coverage and Connectivity. If reliability is the main concern for a WSN, then Q-Coverage and P-Connectivity definitely need to be incorporated. However, the lifetime of a WSN decreases when Q-Coverage and P-Connectivity are enforced; after some time the network dies and is of no use, so there is a trade-off between the reliability and the lifetime of a WSN. If an EH-WSN is used, the lifetime increases while the Q-Coverage and P-Connectivity QoS parameters can still be used to achieve reliability. This paper proposes a sensor node scheduling heuristic for EH-WSNs that considers Q-Coverage and P-Connectivity, and compares this heuristic between a battery-powered WSN and an EH-WSN. The comparison shows that the lifetime with the EH-WSN is much greater than with the battery-powered WSN. This research is beneficial for real-time WSN applications, where an EH-WSN provides power continuously without intervention.
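The abstract does not spell out the heuristic itself, so the following is only a generic illustration of Q-Coverage scheduling: greedily partition sensors into disjoint cover sets, each of which watches every target with at least q sensors, and activate the sets in rotation to stretch lifetime. P-Connectivity and energy harvesting are omitted for brevity, and all names are hypothetical.

```python
def greedy_q_cover_schedules(sensors, targets, q):
    """Greedily build disjoint schedules, each q-covering every target.

    sensors: dict mapping sensor name -> set of targets it covers.
    Returns a list of schedules (lists of sensor names); activating one
    schedule at a time while the rest sleep extends network lifetime.
    """
    remaining = dict(sensors)
    schedules = []
    while True:
        need = {t: q for t in targets}   # coverage still required per target
        chosen = []
        pool = dict(remaining)
        while any(v > 0 for v in need.values()):
            # pick the sensor covering the most still-needed targets
            best = max(pool,
                       key=lambda s: sum(1 for t in pool[s] if need.get(t, 0) > 0),
                       default=None)
            if best is None or sum(1 for t in pool[best] if need.get(t, 0) > 0) == 0:
                return schedules         # cannot complete another q-cover
            chosen.append(best)
            for t in pool[best]:
                if need.get(t, 0) > 0:
                    need[t] -= 1
            del pool[best]
        schedules.append(chosen)
        for s in chosen:                 # disjoint sets: remove used sensors
            del remaining[s]

# Hypothetical topology: four sensors watching two targets.
sensors = {"s1": {"t1", "t2"}, "s2": {"t1"}, "s3": {"t2"}, "s4": {"t1", "t2"}}
targets = {"t1", "t2"}
schedules = greedy_q_cover_schedules(sensors, targets, q=1)
```

Raising q tightens reliability and shrinks the number of schedules, which is exactly the reliability-versus-lifetime trade-off the paper discusses.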
-
Telematics Healthcare Through Digital Terrestrial Television Networks: Applications and Perspectives
Authors: Konstantinos Kardaras, George I. Lambrou and Dimitrios Koutsouris
Background: The trend nowadays is the interconnection of rural health centers with a specialized monitoring medical center. Aim: The paper investigates the qualitative and quantitative characteristics that degrade the perceived video picture quality for a variety of short MPEG-4 video clips. Methods: Our approach comprised three branches: first, the customization of video quality in terms of video and data compression so that maximum quality is achieved; second, the use of the optimized video data transfer for emergency situations; and third, the subsequent creation of a "virtual doctor". Results: Novel architectures are suggested for creating a broadcasting network for healthcare telematics. Remote medical supervision and healthcare treatment will be provided to patients in rural health centers and understaffed areas that lack telemetry infrastructure and medical expert personnel. Conclusion: The proposed architecture, along with the optimized data transmission, demonstrates the integration of present and forthcoming telecommunication network technologies, enabling remote interactivity for fixed, portable and mobile users.
-
Video Steganography Using DNA and Chaotic Sequence
Authors: Munesh C. Trivedi and Ranjana Joshi
Background: The many activities and tasks performed with the help of digital technologies not only make life easier but also much faster; a world without digital technologies is now hard to imagine. These activities involve the exchange of messages that sometimes need security in terms of confidentiality, integrity, and availability. Objective: To develop an algorithm that can withstand snooping attacks and traffic analysis. Methods: A DNA-based concept is used to perform the encryption. The resulting ciphertext is then hidden in randomly selected frames of a video acting as the cover medium, with a chaotic sequence used to determine the random frames of the carrier video. Results: The performance of the proposed approach was evaluated on different parameters, with satisfactory results. Histogram analysis shows that visual detection is very difficult. If spectral analysis is performed on the output of the encryption procedure, the result is still an encrypted message. Conclusion: Encrypting the plain text, choosing a random frame, and hiding the encrypted text in the LSB of the chosen frame are the key ideas of the proposed algorithm, which exploits the strength of DNA coding and chaotic sequences. Based on the survey and the results obtained, the strength of the proposed algorithm is claimed.
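The chaotic frame-selection step can be sketched with the logistic map, a common choice for such schemes (the abstract does not state which chaotic map the authors use, so the map and parameters here are assumptions). Sender and receiver who share the seed and parameter regenerate the same frame indices, so the index list never has to be transmitted.

```python
def chaotic_frame_indices(n_frames, n_select, x0=0.7, r=3.99):
    """Pick n_select distinct frame indices from an n_frames-long carrier
    video using the logistic map x <- r*x*(1-x), chaotic for r near 4.
    (x0, r) act as a shared secret between sender and receiver."""
    indices, seen = [], set()
    x = x0
    while len(indices) < n_select:
        x = r * x * (1 - x)          # next chaotic value in (0, 1)
        idx = int(x * n_frames) % n_frames
        if idx not in seen:          # skip repeats so frames are distinct
            seen.add(idx)
            indices.append(idx)
    return indices

frames = chaotic_frame_indices(500, 10)
print(frames)
```

Because the map is highly sensitive to its initial condition, an attacker who does not know (x0, r) cannot predict which frames carry payload, which is what frustrates traffic analysis.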
-
Acumen Message Drop Scheme (AMD) in Opportunistic Networks
Authors: Halikul Lenando, Aref Hassan K. Ali and Mohamad Alrfaay
Background: In traditional networks, nodes drop messages to free up buffer space. In Opportunistic Networks, however, keeping messages alive until they reach their destination is crucial. This paper therefore proposes an Acumen Message Drop scheme (AMD) that considers the impact of the message drop decision on data dissemination performance. Methods: AMD drops a message based on the following considerations: the estimated time for the message to reach its destination, the message's time to live, the message transmission time, and the message's waiting time in the queue. The AMD scheme works as a plug-in for any routing protocol. Results: Performance evaluation shows that integrating the proposed scheme with the PRoPHET routing protocol may increase efficiency by up to 80%, while integration with the Epidemic routing protocol increases efficiency by up to 35%. Moreover, the proposed scheme significantly increases performance in networks with limited resources. Conclusion: To the best of our knowledge, most previous works have not addressed formulating the message drop decision in non-social, stateless opportunistic networks without affecting performance.
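The abstract lists the four inputs to the drop decision without giving the formula, so the sketch below combines them in one plausible way: messages that can no longer reach their destination within their remaining time to live are sacrificed first. The field names and tie-breaking rules are assumptions, not the paper's actual AMD scheme.

```python
from dataclasses import dataclass

@dataclass
class Message:
    ttl: float       # remaining time-to-live (s)
    eta: float       # estimated time to reach the destination (s)
    tx_time: float   # time needed to transmit the message (s)
    waited: float    # time already spent waiting in the queue (s)

def drop_candidate(queue):
    """Choose the message to drop when the buffer is full.

    Heuristic sketch: a message that cannot plausibly arrive before its
    TTL expires (eta + tx_time > ttl) is the cheapest to sacrifice;
    among those, drop the one that has waited longest. If every message
    still has a chance, drop the one with the worst slack."""
    doomed = [m for m in queue if m.eta + m.tx_time > m.ttl]
    if doomed:
        return max(doomed, key=lambda m: m.waited)
    return min(queue, key=lambda m: m.ttl - m.eta - m.tx_time)

# Hypothetical buffer: the second message can no longer make it in time.
queue = [Message(ttl=100, eta=30, tx_time=5, waited=10),
         Message(ttl=20, eta=40, tx_time=5, waited=50)]
victim = drop_candidate(queue)
```

Because the function only inspects per-message fields, it can sit beside any routing protocol as a buffer-management plug-in, matching the deployment model the abstract describes.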
-
An Iterative Power Regulation Technique in Emergency Cognitive Radio Network Based on Firefly Algorithm
Authors: Swagata R. Chatterjee, Supriya Dhabal, Swati Chakraborti and Mohuya Chakraborty
Background: Manmade disasters such as explosions, toxic waste, and chemical spills have become a pressing concern for society. Because manmade disasters are not an everyday phenomenon, allocating separate resources for them is not realistic. The idea is to use the existing cellular infrastructure to implement a single-hop cognitive radio sensor network that protects human beings from manmade disasters. Objectives: The main objectives of this paper are: (a) design of an efficient iterative power regulation algorithm based on the Firefly Algorithm for the proposed network; (b) computation of the sensor nodes' optimal power for different positions of the cellular user relative to the base station, assuming the cellular user cedes power to its own sensor node; (c) computation of the maximum number of sensor nodes coupled with a single cellular user at different distances from the base station under the worst channel conditions; and (d) a comparative performance analysis with state-of-the-art algorithms. Methods: In the presence of explosive and toxic gases, the cognitive radio in a sensor node establishes a connection with the nearest base station to send a pre-disaster alert signal using the cellular user's resources. Power is distributed among the sensor nodes while maintaining the fundamental requirements of the cellular users. An iterative power regulation mechanism distributes power between sensor nodes and the cellular user to achieve reliable network utility. The fitness function is designed under interference constraints, and the algorithm is implemented on the MATLAB platform to search for the optimal power of the sensor nodes by maximizing the fitness function.
Results: Comparative performance analysis demonstrates the effectiveness of the proposed algorithm in terms of convergence speed, the position of the mobile phone user relative to the base station, the number of sensor nodes coupled with a single cellular user, and Jain's fairness factor. Conclusion: The proposed network mitigates the consequences of manmade disasters and achieves reliable transmission of emergency information prior to a disaster without disrupting cellular phone users. Simulation results validate that the proposed IPRFA algorithm outperforms state-of-the-art methods in terms of power sharing and Jain's fairness factor.
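For readers unfamiliar with it, the Firefly Algorithm at the heart of IPRFA can be sketched generically: each firefly moves toward brighter (fitter) ones with an attractiveness that decays with distance, plus a small random walk. The power-allocation fitness below (log-utility with a soft interference-budget penalty) is a hypothetical stand-in, not the paper's fitness function.

```python
import math
import random

def firefly_optimize(fitness, dim, bounds, n_fireflies=15, n_iter=50,
                     beta0=1.0, gamma=1.0, alpha=0.2, seed=42):
    """Generic Firefly Algorithm: fireflies move toward brighter ones
    with attractiveness beta0*exp(-gamma*r^2) plus an alpha-sized
    random walk, clamped to the search bounds."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)]
           for _ in range(n_fireflies)]
    for _ in range(n_iter):
        light = [fitness(p) for p in pop]
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if light[j] > light[i]:      # move i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [min(hi, max(lo,
                              a + beta * (b - a) + alpha * (rng.random() - 0.5)))
                              for a, b in zip(pop[i], pop[j])]
                    light[i] = fitness(pop[i])
    best = max(pop, key=fitness)
    return best, fitness(best)

# Hypothetical power-allocation utility: reward sum log(1 + p_k) for the
# sensor nodes, with a soft penalty when total power exceeds an
# interference budget (all numbers invented for illustration).
def utility(p, budget=3.0):
    reward = sum(math.log(1 + x) for x in p)
    return reward - 10.0 * max(0.0, sum(p) - budget)

best, val = firefly_optimize(utility, dim=3, bounds=(0.0, 2.0))
```

The penalty term plays the role of the interference constraint in the paper's formulation: allocations that would disturb the cellular user are driven out of the population.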