Abstract:How to efficiently and accurately identify the causal genes of corresponding diseases has become a research hot spot. When no controlled experiment can be performed, causal discovery methods are commonly used to detect causal genes. However, traditional independence tests on high-dimensional data suffer from high time complexity and low accuracy. To alleviate this problem, we propose a residual independence test algorithm that combines the partial correlation test with a linear residual independence test, compressing the search space of the conditioning set of the CI (conditional independence) test and improving accuracy. We then design a causal discovery algorithm based on the residual independence test, which distinguishes Markov equivalence classes through V-structures and causal functional models, and apply it to the detection of pathogenic genes in real cancer datasets. The results show that the proposed algorithm is significantly better than existing algorithms in many aspects.
Keywords:causal network;causal inference;conditional independence;causal gene;CI(conditional independence) test
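To make the residual independence test concrete, below is a minimal Python sketch of a CI test via linear residuals and partial correlation (Fisher's z). The linear-Gaussian assumption and all names are ours for illustration; the paper's algorithm additionally compresses the search space of the conditioning set.

```python
import numpy as np
from scipy import stats

def residual_ci_test(x, y, z=None, alpha=0.05):
    """Sketch: test X independent of Y given Z by regressing X and Y
    on Z (least squares) and checking whether the residuals are
    correlated (Fisher's z). z is an (n, k) matrix or None."""
    n = len(x)
    if z is not None and z.size > 0:
        Z = np.column_stack([np.ones(n), z])               # add intercept
        rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]  # residual of X on Z
        ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]  # residual of Y on Z
        k = z.shape[1]
    else:
        rx, ry, k = x - x.mean(), y - y.mean(), 0
    r = np.corrcoef(rx, ry)[0, 1]                          # partial correlation
    z_stat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    p_value = 2 * (1 - stats.norm.cdf(abs(z_stat)))
    return p_value > alpha   # True: accept conditional independence at level alpha
```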
Abstract:In the new "cloud-edge-end" architecture of the Internet of Things (IoT), the challenges of data protection are highlighted. Blockchain technology is regarded as a potential candidate, as it realizes multi-party consensus and privacy protection of data on open networks. However, its high-loss disordered competition mode and low-efficiency trustless mechanism make it impractical in the resource-constrained environment of the IoT. This paper proposes an evolutionary multi-stage game optimization model, which avoids vicious competition through intelligent perception and sorting, accumulates credibility through asynchronous verification, and positively motivates the preferred nodes. In particular, based on end-edge-cloud collaborative optimization, the model realizes data block recycling to reduce the verification workload, and designs a utility-function-based multi-role intelligent collaboration to make the best use of all resources. Finally, simulations are conducted and an IoT blockchain testbed is built. The experimental results show that the lightweight consensus and verification processing improves efficiency and adaptability.
Keywords:complete information cooperative game;two-stage Stackelberg game;blockchain in the Internet of Things
Abstract:To accelerate convolutional neural networks in IoT (Internet of Things) devices, we propose RCP (RISC-V CNN Processor), a customized acceleration processor for convolutional neural networks based on the RISC-V architecture, which accelerates convolutional computation from the hardware side through custom processor technology. We design a five-stage pipeline for the RISC-V processor and provide solutions to data and control hazards in the pipeline. The design also customizes four instructions, MLAD/MSTORE/MMUL/MPOOL, for large convolutional computations to accelerate the convolution operations of the RCP processor. We verify the custom instruction set and test the functionality of the RCP processor by running a convolutional neural network. The experimental data show that, with the custom instruction set, the RCP processor processes the verified convolutional neural network model 3.38 times faster, effectively accelerating convolutional neural networks in IoT devices.
Keywords:IoT(Internet of Things);convolutional neural networks;custom processors;acceleration
Abstract:Aiming at the security problems of blockchain systems caused by the malicious behavior of consensus nodes in the blockchain network, a dynamic trust proof mechanism based on LSTM (long short-term memory) and a blacklist (PoDT-LSTMB) is proposed. The mechanism learns and analyzes the behavior data of participating consensus nodes through a two-layer LSTM neural network with a forward attention mechanism, and predicts the behavior tendency of nodes. A blacklist is built based on node trust, eliminating nodes below the trust threshold to improve the overall trust of nodes in the entire network. Taking the normal block chaining rate and the change of node trust as the main evaluation indicators, we conducted comparative experiments against the PoT (Proof of Trust) mechanism and the PoDT-LSTM mechanism without a blacklist. The experimental results show that the accuracy of the two-layer LSTM neural network with the forward attention mechanism reaches 0.9151, and the proposed PoDT-LSTMB mechanism improves the chaining rate of normal blocks by 30% to 33% over the PoT mechanism.
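As a rough illustration of the behavior predictor described above, the following PyTorch sketch stacks a two-layer LSTM with a simple additive attention over time steps. The paper's exact forward attention formulation, feature set, and dimensions are not given here; everything below is an assumed, minimal stand-in.

```python
import torch
import torch.nn as nn

class TrustLSTM(nn.Module):
    """Two-layer LSTM + attention over time steps, predicting a
    consensus node's behavior tendency from its behavior sequence."""
    def __init__(self, in_dim, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.attn = nn.Linear(hidden, 1)      # scores each time step
        self.head = nn.Linear(hidden, 2)      # e.g. honest vs. malicious

    def forward(self, x):                     # x: (batch, seq_len, in_dim)
        h, _ = self.lstm(x)                   # (batch, seq_len, hidden)
        w = torch.softmax(self.attn(h), dim=1)   # attention weights over steps
        ctx = (w * h).sum(dim=1)              # weighted context vector
        return self.head(ctx)                 # behavior-tendency logits
```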
Abstract:An efficient network selection method is the key to ensuring the QoS experience of multiple users in a heterogeneous vehicular network (HVN) environment. However, existing methods usually select the optimal access network from the perspective of individual vehicles, which easily causes an uneven distribution of network resources and partial network congestion. Aiming at these problems, an adaptive clustering and evolutionary game based network selection method for HVN, namely AENS, is proposed. First, the method adopts adaptive clustering to reduce the number of vehicles directly connected to the network, thereby effectively reducing the probability of network congestion under dense traffic conditions. Then, the FAHP and CRITIC methods are used to calculate the subjective and objective weights of candidate network attributes respectively, so as to obtain more accurate comprehensive utility values of candidate networks. Finally, the network selection of vehicles is abstracted into an evolutionary game model based on replicator dynamics, and a memory effect is introduced to speed up convergence, so that vehicles can obtain the globally optimal network selection strategy set by updating their strategies. The experimental results show that in an HVN environment integrating 5G/6G communication, the AENS method can effectively reduce the number of network handovers, improve network throughput, and balance the network load. It achieves load balancing while improving the utilization of network resources, and its advantage is more obvious in dense traffic.
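The evolutionary-game core can be sketched independently of the HVN details. Below, the payoff matrix A merely stands in for the FAHP/CRITIC-weighted network utilities, and the momentum-style term lam is our guess at how a memory effect might speed convergence; none of this is the paper's exact formulation.

```python
import numpy as np

def replicator_step(x, A, dt=0.1, lam=0.3, prev_dx=None):
    """One replicator-dynamics update of the shares x of vehicles
    choosing each candidate network, with a simple memory term."""
    u = A @ x                       # payoff of each pure strategy
    dx = x * (u - x @ u)            # replicator dynamics: x_i (u_i - mean u)
    if prev_dx is not None:
        dx = (1 - lam) * dx + lam * prev_dx   # memory effect (momentum)
    x = np.clip(x + dt * dx, 0.0, None)
    return x / x.sum(), dx          # renormalize to a distribution

# Toy usage: shares of vehicles over 3 candidate networks.
x, dx = np.array([0.5, 0.3, 0.2]), None
A = np.array([[1.0, 0.4, 0.2], [0.4, 0.8, 0.3], [0.2, 0.3, 0.6]])
for _ in range(200):
    x, dx = replicator_step(x, A, prev_dx=dx)
```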
Abstract:Aiming at the low data calculation efficiency and high energy consumption of sensing nodes in existing two-layer wireless sensor network range queries, a range query calculation method based on an optimized Paillier algorithm is proposed. First, a verifiable optimized Paillier method is used to encrypt the sensing data, realizing operations on ciphertext while ensuring data security and privacy, and the computation is moved from the query node to the storage node to improve the efficiency of data operations. Second, a low-power numerical comparison method based on left-most 0-1 encoding and the HMAC data digest algorithm is proposed, which reduces the energy consumption of sensing nodes while keeping data comparison stable. Finally, the design and implementation of the method are given: sensing nodes are built from a Raspberry Pi with temperature, humidity, and light intensity sensors, storage nodes are built on the NVIDIA TX2 edge computing platform, and the proposed range query method is implemented on this experimental platform. Compared with existing methods in terms of sensing node energy consumption and data computing efficiency, the results show that the proposed method improves data computing efficiency while reducing the energy consumption of sensing nodes.
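The left-most 0-1 encoding comparison is a known building block and can be sketched directly: x > y exactly when the 1-encoding of x intersects the 0-encoding of y, and HMACing each prefix lets the storage node test that intersection without seeing plaintext values. The key, bit width, and digest below are illustrative choices, not the paper's parameters.

```python
import hmac, hashlib

def one_encoding(v, n=16):
    """Prefixes of v's n-bit string that end in a 1 bit."""
    s = format(v, f'0{n}b')
    return {s[:i + 1] for i in range(n) if s[i] == '1'}

def zero_encoding(v, n=16):
    """Prefixes with a 0 bit flipped to 1 (one per 0 bit)."""
    s = format(v, f'0{n}b')
    return {s[:i] + '1' for i in range(n) if s[i] == '0'}

def mac_set(key, items):
    return {hmac.new(key, t.encode(), hashlib.sha256).hexdigest() for t in items}

key = b'shared-sensing-key'      # illustrative shared key
x, y = 37, 29
# x > y  iff  1-encoding(x) intersects 0-encoding(y); HMAC digests hide the values.
print(bool(mac_set(key, one_encoding(x)) & mac_set(key, zero_encoding(y))))  # True
```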
Abstract:Test cases play a significant role in software testing, which is a vital method for guaranteeing the reliability and security of embedded operating systems. In existing practice, the knowledge contained in historical test cases cannot be fully utilized, and test case reuse in traditional settings is weak. Aiming at these defects, a recommendation model based on a knowledge graph for embedded operating system test case reuse is proposed. First, this paper uses a knowledge graph to store and retrieve data with complex relationships. Second, an ontology model is designed and a domain knowledge graph is created from the entities and relationships extracted from historical test cases. Then, an unsupervised contrastive learning natural language processing method is adopted for Chinese text similarity matching. Finally, a reuse recommendation model for embedded operating system test cases is built. Experiments demonstrate that the proposed model can assist testers in effectively reusing test cases and achieves a 94.305% coverage rate, which significantly reduces testing costs and is of significant value for engineering applications.
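A minimal sketch of the similarity matching step, assuming sentence embeddings from some unsupervised contrastive encoder (SimCSE-style) are already available; the knowledge graph storage and ontology construction are not shown.

```python
import numpy as np

def recommend(query_vec, case_vecs, top_k=5):
    """Rank historical test cases by cosine similarity between the
    embedding of a new requirement and embeddings of stored cases."""
    q = query_vec / np.linalg.norm(query_vec)
    C = case_vecs / np.linalg.norm(case_vecs, axis=1, keepdims=True)
    return np.argsort(C @ q)[::-1][:top_k]   # indices of most similar cases
```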
Abstract:Providing effective protection for iris data is of great importance for the application and popularization of iris biometrics. Addressing the local ranking-based iris template protection scheme, which achieves high accuracy but low irreversibility and is vulnerable to ranking inversion attacks, two improved schemes based on an ordinal value fusion strategy are proposed. The original iris data are first XORed with l different application-specific parameters and ranked to obtain the corresponding ordinal values. After fusing the ordinal values, the two improved schemes further enhance irreversibility by re-ranking or modulo operations, respectively. The experimental results show that, compared with the local ranking scheme, the first improved scheme achieves an accuracy reduction of 2% or less with an irreversibility index improvement of more than 15%, and the second achieves an irreversibility index improvement of more than 30% with an accuracy reduction of 10% or less.
Keywords:cancelable iris biometric;non-invertible transform;improved local ranking;ordinal value fusion
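A toy numpy sketch of the ordinal value fusion idea: XOR with l application-specific parameters, rank each result, fuse the ordinal values, then either re-rank (scheme 1) or apply a modulo (scheme 2). Fusion by summation and all sizes are assumptions, not the paper's exact construction.

```python
import numpy as np

def protect(template, params, mod=None):
    """Ordinal value fusion sketch over a quantized iris feature vector."""
    ranks = [np.argsort(np.argsort(template ^ p)) for p in params]  # ordinal values
    fused = np.sum(ranks, axis=0)                 # fuse the l ordinal vectors
    if mod is not None:
        return fused % mod                        # scheme 2: modulo operation
    return np.argsort(np.argsort(fused))          # scheme 1: re-ranking

rng = np.random.default_rng(0)
template = rng.integers(0, 256, size=32, dtype=np.uint16)  # toy iris features
params = [rng.integers(0, 256, size=32, dtype=np.uint16) for _ in range(4)]  # l = 4
protected = protect(template, params)             # cancelable template
```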
Abstract:Aiming at the poor transmission efficiency, low real-time performance, and high packet loss rate of data collected by mine sensors at present, an autonomous mine cruise method for unmanned aerial vehicles (UAVs) based on deep reinforcement learning is proposed to effectively collect data from Internet of Things nodes. Specifically, the method uses the UAV as the transmission intermediary and, according to the different data generation cycles of mine IoT nodes, applies the twin delayed deep deterministic policy gradient (TD3) algorithm to realize optimal UAV path planning. The algorithm also designs environment, reward, and state information that conform to the actual mine scenario. In particular, we propose a predictive waiting method, which predicts the generation time of the data to be collected and determines the target node; the UAV flies to the target node and waits in advance within the signal coverage range to obtain the data generated by the mine sensor in real time. The experimental results show that the UAV can realize optimal path planning and collect node data through autonomous decisions. At around 700 training episodes, the reward peaks and the algorithm converges, showing excellent performance.
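The predictive waiting step can be illustrated separately from the TD3 policy: predict each node's next data generation time from its known cycle, then pick a reachable node that minimizes idle waiting inside coverage. Field names and the greedy selection rule below are our assumptions.

```python
import math

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def pick_target(nodes, uav_pos, speed, now):
    """Choose the node whose predicted next generation the UAV can
    reach in time, minimizing time spent waiting inside coverage."""
    best, best_wait = None, float('inf')
    for n in nodes:                                  # n: {'pos','period','last_gen'}
        k = (now - n['last_gen']) // n['period'] + 1
        next_gen = n['last_gen'] + k * n['period']   # predicted generation time
        arrive = now + dist(uav_pos, n['pos']) / speed
        wait = next_gen - arrive                     # idle time if we fly now
        if 0 <= wait < best_wait:
            best, best_wait = n, wait
    return best, best_wait
```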
Abstract:The recognition of encryption algorithms is of great significance to cryptanalysis research. Scholars have carried out some research and made progress in this field, but there are few theoretical studies on Hash function recognition. In this paper, randomness detection features are further mined. The Euclidean distance is used to screen out the three detection items that best distinguish Hash functions. Based on the core concerns of the selected detection items, the feature generation method is reconstructed. Combined with a random forest model, a Hash function recognition scheme based on combined randomness features is proposed. Experimental analysis shows that the proposed recognition scheme clearly outperforms the traditional scheme based on randomness detection features.
Keywords:cryptanalysis;Hash function;feature extraction;randomness test
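A sketch of the screen-then-classify pipeline, assuming a feature matrix X of randomness-test statistics (one row per ciphertext sample) and labels y naming the Hash algorithm; the per-feature distance score is our simplification of the Euclidean-distance screening.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def top_k_by_class_distance(X, y, k=3):
    """Keep the k features along which the per-class mean vectors
    are farthest apart (summed squared distances between class means)."""
    classes = np.unique(y)
    means = np.stack([X[y == c].mean(axis=0) for c in classes])
    score = np.zeros(X.shape[1])
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            score += (means[i] - means[j]) ** 2
    return np.argsort(score)[::-1][:k]

# idx = top_k_by_class_distance(X, y)
# clf = RandomForestClassifier(n_estimators=200).fit(X[:, idx], y)
```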
Abstract:In the traditional image captioning task, each method describes the image only at a shallow level. Due to the lack of real-world knowledge, it is often difficult to mine the logical semantic relationships of objects in a specific context. The introduction of news text brings new possibilities for image captioning, but it also demands a higher learning ability from models. In addition, news data often contain multiple closely related images, which makes existing single-image captioning methods unsuitable for the news image set captioning task. To solve these problems, this paper proposes a news image set captioning method based on image and text bidirectional guidance attention, i.e., ITBGA, which takes the image set as the research object and the corresponding news text as background knowledge. Based on ITBGA, it realizes cross-modal information interaction at coarse and fine granularities respectively, and uses a pointer network to assist the generation of named entity words. The experimental results on the news image set constructed in this paper show that ITBGA improves the quality of the description text and achieves the best performance on the CIDEr metric.
Keywords:image captioning;ITBGA(image and text bidirectional guidance attention);image set;news text
Abstract:As a significant task in machine reading comprehension, multiple choice has received widespread attention in natural language processing (NLP). As the texts to be processed grow ever longer, long text multiple choice becomes a new challenge. However, existing long text processing methods tend to lose useful information in the text, leading to inaccurate results. To solve these problems, this paper proposes a long text multiple choice answering method based on compression and reasoning (LTMCA), which identifies relevant sentences by training a judgment model and concatenates them into a short text that is fed into an inference model for reasoning. To improve the accuracy of the judgment model, interaction between the passage and the options is added to supplement the passage's attention to the options, so that relevant sentences are identified in a targeted way and the multiple-choice answering task is completed more accurately. Experiments are carried out on the CLTMCA Chinese long text multiple choice dataset constructed in this paper, and the results show that the proposed method effectively solves the problems of BERT in handling long text multiple choice tasks, with substantial improvements over other methods on all evaluation metrics.
Keywords:BERT(bidirectional encoder representation from transformer);Chinese long text;multiple choice;attention
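The compress-then-reason idea can be sketched as follows, with `scorer` standing in for the trained judgment model (which, per the paper, also attends from the passage to the options); the length budget and greedy selection are illustrative assumptions.

```python
def compress(passage_sents, question, options, scorer, budget=512):
    """Score each sentence's relevance to question+options, then keep
    the top sentences (in original order) within a character budget."""
    query = question + ' ' + ' '.join(options)
    ranked = sorted(enumerate(passage_sents),
                    key=lambda s: scorer(s[1], query), reverse=True)
    keep, used = [], 0
    for idx, sent in ranked:
        if used + len(sent) <= budget:
            keep.append((idx, sent))
            used += len(sent)
    return ''.join(s for _, s in sorted(keep))   # short text for the reasoner
```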
Abstract:Traditional aging detection methods based on trend analysis may produce high false alarm rates. Multi-version detection methods differentially analyze the software version under test against a previous robust version taken as the baseline, but possible software aging in the baseline version is ignored. To address these issues, a workload-related software aging detection method based on differential analysis is proposed. The method applies different intensities of load to the software under test and helps developers detect software aging in a single version (without prior knowledge) during development, by monitoring differences in the trend of memory resource consumption (resident set size, RSS) and analyzing their relationship with the load difference. The results show that the proposed method is capable of detecting software aging caused by memory leaks and can be applied to existing software such as Squid.
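A minimal sketch of the differential trend analysis, assuming RSS samples collected under a low-load and a high-load run of the same version; the positive-slope check and slope-ratio threshold are illustrative, not the paper's decision rule.

```python
from scipy.stats import linregress

def aging_suspected(rss_low, rss_high, slope_ratio=2.0):
    """Fit linear trends to RSS series under two load intensities;
    a growth trend that scales with load suggests workload-related
    aging such as a memory leak."""
    s_low = linregress(range(len(rss_low)), rss_low).slope
    s_high = linregress(range(len(rss_high)), rss_high).slope
    return s_high > 0 and s_high > slope_ratio * max(s_low, 1e-9)
```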
Abstract:The embedding representation of temporal knowledge graphs is one of the hot spots in the field of knowledge engineering. Existing temporal embedding models mostly integrate time information into static embedding models in different ways to learn the temporal evolution of entities and relationships, but they struggle to mine and learn fine-grained temporal correlation information. Therefore, building on previous research, we propose a temporal graph embedding model of contextual temporal correlation in the complex space, which subdivides fine-grained temporal information into the relevance of knowledge start times and the consistency of knowledge time intervals. A context-aware temporal correlation information mining method is designed to select semantically similar contextual quadruples, mine the temporal correlation information contained in the training quadruples and contextual quadruples, and enhance the embedding model's learning of fine-grained temporal information. The proposed method is evaluated on two public temporal knowledge graph datasets, YAGO11k and Wikidata12k, and the results show that, compared with existing methods, it achieves improvements on the MRR (mean reciprocal rank) and Hits@k (k=1,3,10) metrics.
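Purely as an illustration of time-aware scoring in complex space (not the paper's model), the sketch below lets a time embedding modulate the relation embedding in a ComplEx-style score; all shapes and the multiplicative form are assumptions.

```python
import torch

def temporal_complex_score(h, r, t, tau):
    """Score Re(<h, r * tau, conj(t)>): the time embedding tau rotates
    and scales the relation embedding in the complex plane."""
    return torch.sum((h * (r * tau) * torch.conj(t)).real, dim=-1)

# h, r, t, tau: complex embeddings, e.g. torch.randn(8, 100, dtype=torch.cfloat)
```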
Abstract:Recently, with the rapid development of 5G mobile communication technology and artificial intelligence, the Internet has entered a period of data explosion. This situation poses severe bandwidth and energy consumption problems for the traditional cloud computing model when storing and processing massive amounts of data. Mobile edge computing (MEC) has emerged as a new computing model that can effectively relieve these bottlenecks of the traditional cloud computing model. Task offloading is one of the core services of MEC. Focusing on MEC task offloading, this paper first introduces the basic concepts, model architecture, and application scenarios of MEC, and then summarizes and compares the research achievements on task offloading schemes from the perspective of three different offloading objectives. Finally, it analyzes the different types of privacy threats in the task offloading process, reviews existing work, and summarizes future research challenges.
Abstract:As an emerging computing paradigm, edge computing transfers computing resources from the cloud center to servers at the edge of the network to provide computing support for end devices connected to the Internet. At the same time, artificial intelligence (AI), represented by deep neural networks, has developed rapidly and been widely applied to the industrial Internet of Things, smart cities, smart homes, and other fields, benefiting people's production and life. Edge computing and AI empower each other and give birth to a new research field: edge intelligence (EI), in which edge computing uses AI to maintain and manage edge devices while AI provides intelligent services at the edge. In this paper, we elaborate the development status of EI from these two aspects, summarize its challenges, and offer a prospect for future development.