Recent Patents on Computer Science - Volume 10, Issue 2, 2017
The 1-Good-Neighbor Diagnosability of Alternating Group Graph Networks Under the PMC Model and MM* Model
Authors: Ji Jirimutu and Shiying Wang
Background: Many multiprocessor systems use interconnection networks as their underlying topologies, as also described in various patents. An interconnection network is usually represented by a graph in which nodes represent processors and links represent communication links between processors. Studying the topological properties of a system's interconnection network is therefore important. In 2012, Peng et al. proposed a new measure for fault diagnosis of such systems, the g-good-neighbor diagnosability (also called the g-good-neighbor conditional diagnosability), which requires that every fault-free node have at least g fault-free neighbors. The n-dimensional alternating group graph network ANn has been proved to be an important viable candidate for interconnecting a multiprocessor system. Its features include low node degree, small diameter, symmetry, and a high degree of fault tolerance. Results: In this paper, we prove that the 1-good-neighbor diagnosability (also called the nature diagnosability) of ANn is 2n-4 for n ≥ 5 under both the PMC model and the MM* model, that the nature diagnosability of the 4-dimensional alternating group graph network AN4 under the PMC model is 4, and that the nature diagnosability of AN4 under the MM* model is 3. Conclusion: In this paper, we investigate the nature diagnosability of ANn under the PMC model and the MM* model. It is proved that the nature diagnosability of ANn under both models is 2n-4 when n ≥ 5. These results show that, under the 1-good-neighbor condition, the nature diagnosability is several times larger than the classical diagnosability of ANn. The work will help engineers develop further measures of nature diagnosability based on the application environment, network topology, network reliability, and statistics of fault patterns.
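The network ANn discussed above can be built as a Cayley graph on the even permutations. The sketch below is an illustration, not the paper's code (generator conventions for alternating group graphs vary across the literature); it constructs the graph in Python and checks the low node degree 2(n-2) mentioned in the abstract.

```python
from itertools import permutations

def is_even(p):
    # parity of a permutation in one-line notation, via inversion count
    inversions = sum(1 for i in range(len(p))
                     for j in range(i + 1, len(p)) if p[i] > p[j])
    return inversions % 2 == 0

def compose(p, q):
    # (p o q)(k) = p(q(k)); permutations stored as tuples
    return tuple(p[q[k]] for k in range(len(p)))

def three_cycle(n, a, b, c):
    # permutation sending a -> b -> c -> a (0-indexed), identity elsewhere
    m = list(range(n))
    m[a], m[b], m[c] = b, c, a
    return tuple(m)

def alternating_group_graph(n):
    # generators (1 2 i) and (1 i 2) for i = 3..n, written 0-indexed;
    # the set is closed under inverses, so the graph is undirected
    gens = ([three_cycle(n, 0, 1, i) for i in range(2, n)] +
            [three_cycle(n, 0, i, 1) for i in range(2, n)])
    vertices = [p for p in permutations(range(n)) if is_even(p)]
    return {v: {compose(v, g) for g in gens} for v in vertices}
```

With this convention, AN4 has 4!/2 = 12 vertices, each of degree 2(4-2) = 4, matching the "low node degree" property the abstract highlights.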
Restricted Version of Paths Avoiding Forbidden Pairs - Complexity and Algorithm
Authors: Eva Milkova and Karel Petranek
Background: NP-complete problems appear in a wide variety of industrial applications and take many forms, as described in various patents. The problem of finding paths that avoid forbidden pairs of vertices, which originates in graph theory, is known to be NP-complete in general. This paper focuses on an intuitive comprehension of the problem restricted to a special class of graphs, and on its hardness. Objective: A few restricted versions of the problem of finding paths avoiding forbidden pairs of vertices are known to be polynomial-time solvable. This paper demonstrates a seemingly simple restricted version that nevertheless remains NP-complete. Method: We show that while the problem formulation is simple, the intrinsic complexity of the restricted version leads to an exponential solution space. Based on these insights, we construct a proof that the problem is NP-complete and demonstrate that its exponential nature stems from the related problem of building an x-y shortest path tree. Results: We devise an exact algorithm that solves the problem in exponential time and then extend it to support arbitrary heuristics, which can improve its average-case running time. Conclusion: The possible applications of the problem are wide, ranging from program testing and verification to protein folding. We conclude the paper with an application to neural networks: analysing co-activations of artificial neurons.
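The exact exponential-time approach described above can be illustrated by a brute-force search. This is a minimal sketch, not the authors' algorithm or heuristic extension: a depth-first search for a simple s-t path that never contains both endpoints of any forbidden pair.

```python
def find_path_avoiding_pairs(adj, s, t, forbidden_pairs):
    """Exhaustive DFS for a simple s-t path avoiding forbidden pairs.

    Worst-case exponential time, as the abstract notes for the
    general problem.
    """
    pairs = [frozenset(p) for p in forbidden_pairs]

    def allowed(visited, v):
        # adding v must not complete any forbidden pair
        return all(not pair <= (visited | {v}) for pair in pairs)

    def dfs(v, path, visited):
        if v == t:
            return path
        for w in adj.get(v, ()):
            if w not in visited and allowed(visited, w):
                result = dfs(w, path + [w], visited | {w})
                if result is not None:
                    return result
        return None

    return dfs(s, [s], {s})
```

For example, with edges s-a-t and s-b-t and the forbidden pair (a, t), the route through a is pruned and the search returns the path through b.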
An Image Segmentation Method Based on Renyi Relative Entropy and Gaussian Distribution
Authors: Fangyan Nie, Pingfeng Zhang, Jianqi Li and Tianyi Tu
Background: Image segmentation is a necessary prerequisite for many higher-level computer vision tasks, such as object recognition, image understanding, and image retrieval. However, the segmentation problem is inherently ill-posed due to the large number of possible partitionings of any single image, so image segmentation remains one of the major challenges in image analysis. There are also many patents on these problems. This paper introduces a new image segmentation method, whose effectiveness is demonstrated by experiments. Method: Image segmentation is an important step in obtaining quantitative information in image processing, and thresholding techniques are popular in practical applications. The Gaussian distribution is a parametric statistical model frequently employed to characterize the statistical behavior of a process signal in industry. This paper uses the Gaussian distribution to approximate the histogram of an image and presents a new histogram-thresholding segmentation method based on Renyi relative entropy. Results: The experimental results on non-destructive testing images and other types of images demonstrate the success of the proposed method compared with alternative thresholding methods. Conclusion: Based on Renyi relative entropy and the Gaussian distribution, a new image thresholding approach is presented in this paper. Experimental results on various kinds of images show that the proposed method is valid for the task of image segmentation.
An Efficient Adaptive Broadcast Protocol for Different Scenarios in VANETs
Authors: Qiubo Huang and Fei Liu
Background: VANETs demand fast and reliable real-time data transmission for safety reasons, which is challenging due to the vehicles' fast movement, the dynamics of the network topology, and the variety of communication networks. This paper reviews the current scientific and patent literature and proposes an efficient, adaptive broadcast protocol that uses neighbor nodes' distances and driving directions to determine packet-forwarding nodes. The protocol can discover crossroads from the neighbor nodes' moving directions and is therefore suitable for broader application scenarios, including intersections and straight roads. Methods: We use periodic hello packets to establish a neighbor list and use the neighbors' information, including their driving directions and distances, to select a forwarding node. We propose a broadcast protocol that combines distance-based and neighbor-list-based forwarding. Results: This paper focuses mainly on reducing broadcast delay, improving dissemination range in VANETs, and avoiding the impact of network topology changes. Road vehicle density is an important factor for time-critical safety applications. Our OPNET Modeler simulation results demonstrate the following advantages of the protocol: fewer broadcast packets, a broader transmission range, and higher channel utilization. Conclusion: Our scheme not only effectively reduces the number of redundant data packets but also broadcasts packets toward more road traffic intersections. The simulation results show that our scheme forwards packets to all road segments with minimum overhead and the highest successful packet delivery rate, so our broadcast protocol can be applied to various scenarios in VANETs.
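The neighbor-list-based selection step can be sketched as below. This is a simplified illustration under assumed data structures (node id, position, coarse driving direction), not the protocol itself: the farthest in-range neighbor is chosen as the forwarder to maximize dissemination range, and a distinct forwarder is kept per observed driving direction, since neighbors moving in other directions hint at an intersection.

```python
import math

def select_forwarders(sender_pos, neighbors, comm_range=300.0):
    """Pick one forwarding node per observed driving direction.

    neighbors: iterable of (node_id, (x, y), direction) built from
    periodic hello packets. The farthest in-range neighbor in each
    direction wins, covering crossing roads at intersections.
    """
    forwarders = {}
    for node_id, pos, direction in neighbors:
        dist = math.dist(sender_pos, pos)
        if dist > comm_range:
            continue  # outside communication range
        best = forwarders.get(direction)
        if best is None or dist > best[1]:
            forwarders[direction] = (node_id, dist)
    return {d: node for d, (node, _) in forwarders.items()}
```

At an intersection the returned map contains one forwarder per road direction, so a single rebroadcast decision can cover several road segments.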
A Duplicate Data Detection Approach Based on MapReduce and HDFS
Authors: Fang Wei, Wen Xue-Zhi and Zheng Yu
Background: With the surge in the volume of collected data, deduplication has become one of the problems faced by researchers. Deduplicating coarse-grained redundant data yields significant savings in storage and network bandwidth and improves system scalability. The conventional methods of deleting duplicate data, such as hash comparison and binary differential incremental techniques, lead to several bottlenecks when processing large-scale data. Moreover, the traditional Simhash similarity method gives little consideration to the natural similarity of text in some specific fields and cannot efficiently process large-scale text data in parallel. This paper examines several of the most important patents in the area of duplicate data detection, and then focuses on large-scale data deduplication based on MapReduce and HDFS. Methods: We propose a duplicate data detection approach based on MapReduce and HDFS that uses the Simhash similarity algorithm and the Shared Nearest Neighbor (SNN) clustering algorithm, and we explain our distributed duplicate detection workflow. The important technical advantages of the invention include generating a checksum for each processed record and comparing the generated checksums to detect duplicate records. The approach produces fingerprints of short texts with the Simhash algorithm and clusters the fingerprints with the SNN algorithm; the whole parallel process is implemented with the MapReduce programming model. Results: From the experimental results, we conclude that our proposed approach obtains MapReduce job schedules with significantly shorter execution times, making it suitable for processing large-scale datasets in real applications. The experimental results show that the proposed approach has better performance and efficiency.
Conclusion: In this paper, we propose a duplicate data detection approach based on MapReduce and HDFS that uses the Simhash similarity algorithm and the SNN clustering algorithm. The results show that the new approach, applied on MapReduce, is suitable for document similarity calculation over large-scale data sets: it greatly reduces the time overhead, achieves higher precision and recall, and provides a reference for solving the same problem at large scale. The invention also applies to large-scale duplicate data detection and is a good solution for large-scale data processing. In the future, we plan to design and implement a scheduler for MapReduce jobs and a new similarity algorithm, with the primary focus on large-scale duplicate data detection.
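The Simhash fingerprinting step at the heart of the workflow can be sketched as follows. This is a generic Simhash implementation (token hashing via MD5 is an assumption for illustration, not the paper's choice), without the SNN clustering or the MapReduce parallelization: near-duplicate texts yield fingerprints with a small Hamming distance.

```python
import hashlib

def simhash(text, bits=64):
    """64-bit Simhash fingerprint of a whitespace-tokenized text.

    Each token's hash casts a +1/-1 vote per bit; the fingerprint
    keeps the bits with a positive vote total.
    """
    v = [0] * bits
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode("utf-8")).hexdigest(), 16)
        for i in range(bits):
            v[i] += 1 if (h >> i) & 1 else -1
    return sum(1 << i for i in range(bits) if v[i] > 0)

def hamming(a, b):
    # number of differing bits between two fingerprints
    return bin(a ^ b).count("1")
```

In the distributed workflow the abstract describes, these fingerprints would be computed in the map phase and grouped by SNN clustering; here the Hamming distance alone already separates a near-duplicate from an unrelated document.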
Realization of a Self Powered Road Side Unit Using Network Processor Technology
Background: Vehicular ad hoc networks (VANETs) are a promising technology that enables applications such as road traffic safety, Internet access, advertising and multimedia streaming, as described in various patents as well. VANETs can operate in two communication modes, Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I). In the second case, a permanent communication infrastructure of Road Side Units (RSUs) can be used to relay the various safety messages or to act as a gateway to the Internet. Regarding the roadside infrastructure, most previous works assume a metropolitan setting where conventional wired power is available at a reasonable cost. However, this assumption does not always hold in practice. Our goal here is to propose a self-powered VANET infrastructure, whose major component is the road side unit. We propose that RSUs harvest the energy required for their operation from the surrounding environment, particularly solar energy. Such a proposal allows RSUs to be set up in any location without regard to power supply availability, so a wider area can be covered by the VANET infrastructure. Methods: The UBICOM IP2022 network processor platform is adopted to implement the proposed Road Side Unit (RSU). Comprehensive design steps are described, and the values of the system components (the number of solar panels, the capacity of the battery cells, etc.) are given in detail. To reduce the power consumption of the proposed RSUs and to lengthen their battery life, a new power management scheme called Event Driven Duty Cycling (EDDC) is suggested and implemented. This technique exploits an important feature of the Ubicom board, "Clock Stop Mode", in which the system clock may be disabled, which in turn disables the CPU core clock and hence the Ubicom board. When the system clock is disabled, the interrupt logic continues to function and a sleep timer keeps running. Recovery from clock stop mode (SLEEP mode) to normal execution is possible through sleep-timer interrupts or in response to an external interrupt from the WLAN NIC. Results: The results presented in the paper show that an RSU working in SLEEP mode needs fewer parallel solar panels and smaller batteries to run multiple VANET applications. These results prove the effectiveness of the suggested power management scheme in extending the life and serviceability of the solar-harvesting, battery-based road side units, which reflects positively on building a reliable and available VANET infrastructure. Compared with other duty-cycling methods, the suggested EDDC method performs best in terms of data loss and extra power consumption. The main reasons for this improvement are, first, the precise choice of the sleep periods (according to the timing requirements of the scheduled tasks) and, second, the fast wake-up (in response to an external WLAN NIC interrupt), which guarantees the reception of all incoming packets without loss. Conclusion: This paper presents a novel design of self-powered Road Side Units (RSUs) using network processor technology. We suggest a new power management method, called Event Driven Duty Cycling (EDDC), to manage the solar energy stored in the RSUs' battery cells. RSUs are essential devices with significant responsibilities in the VANET communication infrastructure, and their permanent functionality assures that the VANET fulfils its purpose. Moreover, the capability to harvest power from the surroundings represents an important technology area that eliminates wiring and battery maintenance and allows self-powered RSUs to be deployed. The combination of these technologies with intelligent administration of the energy resources forms a firm foundation for a dependable, green VANET communication infrastructure.
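The energy argument behind EDDC can be made concrete with a back-of-the-envelope model. The numbers below (active power, sleep power, wake-up cost, one second of active time per task) are illustrative assumptions, not figures from the paper; the point is only that sleeping between scheduled tasks shrinks the energy budget, and with it the required solar panel and battery sizing.

```python
def eddc_energy(num_tasks, horizon_s, p_active=2.5, p_sleep=0.05,
                wake_cost_j=0.01, task_time_s=1.0):
    """Compare event-driven duty cycling against an always-on RSU.

    Assumed model: the board is awake task_time_s seconds per task
    (including interrupt-driven wake-ups), sleeps in Clock Stop Mode
    otherwise, and pays wake_cost_j joules per wake-up.
    Returns (eddc_joules, always_on_joules) over horizon_s seconds.
    """
    awake = num_tasks * task_time_s
    asleep = max(horizon_s - awake, 0.0)
    eddc = awake * p_active + asleep * p_sleep + num_tasks * wake_cost_j
    always_on = horizon_s * p_active
    return eddc, always_on
```

With 3 tasks per hour under these assumed parameters, the duty-cycled budget is roughly 2% of the always-on budget, which is the kind of gap that translates into fewer parallel solar panels and smaller batteries.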
An Innovative Model-Driven Slicing Approach for Testing Adaptive Software
Authors: Sanaz Sheikhi and Seyed Morteza Babamir
Background: Adaptive software adapts its behavior to dynamic changes in its environment and in users' requirements. Due to the high complexity of adaptive software, a program slice makes it easier to test, debug, and comprehend, as also described in various patents. Program slicing is especially useful for testing, a critical and costly step in the lifecycle of adaptive software. Method: In this paper, a model-based method for slicing adaptive software is proposed, and test cases are then generated using a new coverage criterion called satisfaction. Our method uses a requirement-based model of adaptive software, the Techne model, as a dependency graph whose nodes and edges are, respectively, the propositions and relations of the program model. The satisfaction criterion is defined over the independent paths of a slice, which consist of the elements of the program's Techne model: hard-goals, soft-goals, quality constraints, domain assumptions, and tasks. Results: Assessing the proposed method on several test scenarios indicates that it reduces the complexity of program slicing and increases the focus on goal and requirement satisfaction, which is the primary intention of adaptive software. Conclusion: Although slicing methods are best known for the analysis of program code and structure, our proposed approach slices the adaptive software model that denotes the software's behavior. Moreover, we choose the coverage criterion of the approach based on the essential parameters of adaptive software, namely the various types of goals to be satisfied. Given that coverage criterion, test cases are tailored to the slicing approach.
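Slicing a dependency graph like the Techne model reduces, at its core, to reachability: collect every model element from which the slicing criterion (say, a hard-goal) can be reached along dependency edges. The sketch below is a generic backward slice over an edge list, an illustration only; the node names and the flat (src, dst) edge encoding are assumptions, not the paper's representation.

```python
from collections import deque

def backward_slice(edges, criterion):
    """All nodes from which `criterion` is reachable along dependency edges.

    edges: iterable of (src, dst) pairs meaning "dst depends on src".
    Returns the slice as a set of node names, including the criterion.
    """
    reverse = {}
    for src, dst in edges:
        reverse.setdefault(dst, []).append(src)
    seen = {criterion}
    queue = deque([criterion])
    while queue:
        node = queue.popleft()
        for pred in reverse.get(node, ()):
            if pred not in seen:
                seen.add(pred)
                queue.append(pred)
    return seen
```

Elements that cannot influence the chosen goal (here, anything feeding only other goals) drop out of the slice, which is what makes the sliced model cheaper to test.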
Safety Risk Assessment of Human-computer Interaction Behavior Based on Bayesian Network
Authors: Zhang Yanjun, Sun Youchao, Zhang Yongjin and Lu Zhong
Background: Safety risk assessment of human-computer interaction behavior is very important for safety design and management in complex human-computer systems such as airplane cockpits and the monitoring systems of nuclear power plants. Many major accidents resulted from the lack of efficient hazard identification and risk assessment approaches, as described in various patents. Methods: A process for human-computer interaction risk assessment is established. Event tree analysis is employed to identify the hazards to the system and to human beings. A dynamic safety risk assessment model based on a Bayesian network (BN) is provided, which accounts for the uncertainty and correlation of human-computer interaction behaviors. The safety risk level is then determined by matching the risk matrix with the assessed severity and probability. Results: An illustration that takes a typical human-computer system as its study object shows that the approach proposed in this paper is suitable and efficient for the safety risk assessment of human-computer interaction behaviors. Conclusion: As technology develops, the safety risk of systems with humans in the loop has received increasing attention. Safety risk assessment of human-computer interaction behavior is a very important part of safety design and management in complex human-computer systems, such as vehicle cockpits and the monitoring systems of nuclear power plants, so that the user experience is enhanced and the safety risk remains controllable.
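The two steps of the method, BN inference for the probability and a risk-matrix lookup for the level, can be sketched on a toy network. All numbers, node names (human error H, machine fault M, accident A), and matrix bands below are illustrative assumptions, not values from the paper.

```python
def accident_probability(p_h=0.1, p_m=0.05, cpt=None):
    """P(accident) in a two-parent Bayesian network by full enumeration.

    cpt maps (human_error, machine_fault) -> P(accident | parents).
    """
    if cpt is None:
        cpt = {(True, True): 0.9, (True, False): 0.3,
               (False, True): 0.4, (False, False): 0.01}
    p_a = 0.0
    for h in (True, False):
        for m in (True, False):
            p_parents = (p_h if h else 1 - p_h) * (p_m if m else 1 - p_m)
            p_a += p_parents * cpt[(h, m)]
    return p_a

def risk_level(p, severity):
    # match assessed probability and severity against a risk matrix
    band = "frequent" if p >= 0.1 else "probable" if p >= 0.01 else "remote"
    matrix = {("catastrophic", "frequent"): "unacceptable",
              ("catastrophic", "probable"): "unacceptable",
              ("catastrophic", "remote"): "review",
              ("marginal", "frequent"): "review",
              ("marginal", "probable"): "acceptable",
              ("marginal", "remote"): "acceptable"}
    return matrix[(severity, band)]
```

With the assumed numbers, P(accident) comes out to 0.05955, which lands in the "probable" band; combined with a "catastrophic" severity the matrix flags the risk as unacceptable, triggering redesign, which is exactly the matching step the abstract describes.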
A Novel Method for Intuitionistic Fuzzy MAGDM with Bonferroni Weighted Harmonic Means
Authors: Weihua Su, Jinming Zhou, Shouzhen Zeng, Chonghui Zhang and Kaifeng Yu
Background: Multiple attribute group decision making (MAGDM) is a common task in modern life: selecting the optimal alternative(s) from several alternatives, or ranking them, by aggregating the performance of each alternative under several attributes. The aggregation techniques play an important role here, as also described in various patents. Methods: We extend the Bonferroni mean (BM) to the intuitionistic fuzzy environment and define a trapezoidal intuitionistic fuzzy Bonferroni harmonic mean (TrIFBHM). Some special properties are discussed based on the trapezoidal intuitionistic fuzzy Bonferroni harmonic element (TrIFBHE). Results: Taking the weight vector of the arguments into account, we develop a weighted trapezoidal intuitionistic fuzzy Bonferroni harmonic mean (WTrIFBHM) and apply it to MAGDM. Furthermore, a weighted trapezoidal intuitionistic fuzzy ordered Bonferroni harmonic mean (WTrIFOBHM) is defined, which is an effective tool for revealing the importance of each attribute and the correlations among attributes in MAGDM problems. Finally, we develop an application of the new approach and illustrate it with a numerical example. Conclusion: The developed approaches for MAGDM in the intuitionistic fuzzy environment have been illustrated with a practical application. The main advantage of the WTrIFBHM is that it can capture the importance of each attribute and the interrelationships between attributes reflected by the TrIFBHEs, account for arguments whose performance is too high or too low, and allow the decision makers to assign all the possible values to the alternatives, so as to obtain reasonable results with more information taken into account.
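The operator being extended is the classic (crisp) Bonferroni mean, BM^{p,q}(a_1, ..., a_n) = ((1/(n(n-1))) * sum over i != j of a_i^p * a_j^q)^(1/(p+q)). The sketch below implements only this crisp base case, not the trapezoidal intuitionistic fuzzy versions (TrIFBHM, WTrIFBHM) the paper develops; its cross terms a_i^p * a_j^q are what let the operator capture interrelationships between arguments.

```python
def bonferroni_mean(values, p=1, q=1):
    """Classic Bonferroni mean BM^{p,q} of positive real arguments."""
    n = len(values)
    if n < 2:
        raise ValueError("Bonferroni mean needs at least two arguments")
    # sum of cross products a_i^p * a_j^q over all ordered pairs i != j
    s = sum(values[i] ** p * values[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

Like the fuzzy extensions, it is idempotent (equal inputs aggregate to themselves), and with p = q = 1 it reduces to the square root of the average pairwise product.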