Named Entity Recognition (NER) plays a vital role in natural language processing. It is the basis of natural language tasks such as relation extraction, machine translation, text summarization, and question-answering systems. Chinese Named Entity Recognition (CNER) is NER in the Chinese context. Existing literature only analyzes the different stages of the technology, without a systematic and in-depth summary of deep learning-based methods. This paper introduces deep learning-based methods in detail from four aspects: model classification, data sets, evaluation criteria, and performance analysis. Finally, we discuss the challenges and opportunities for CNER.
Existing task assignment schemes in spatial crowdsourcing mainly focus on point tasks; there is less research on regional task assignment, and most of it is based on offline assignment. However, to bring regional tasks into real applications, an online assignment scheme is more important. This paper provides an effective online task assignment scheme for regional tasks, which maximizes the overall quality score of regional tasks under budget and time constraints. First, a pre-allocation algorithm (PA) is proposed to pre-allocate workers and tasks based on historical data. Then, an online cross-regional assignment algorithm (CRMW) based on mobile workers is proposed, and a multi-round allocation mechanism is designed to improve the success rate of task allocation. The algorithm assigns tasks within the same area and across all areas in different rounds, and adopts an incentive mechanism based on the original quality ratio, thereby further improving the hit rate of the allocation algorithm. Finally, a regional task decomposition algorithm (RTDA) is proposed to decompose tasks into sub-tasks, and appropriate workers are selected for the sub-region tasks by an optimized particle swarm algorithm. Comparative experiments on real data sets evaluate quality score and running time, and show that the proposed algorithm is effective in improving the quality score.
The number and types of new entities in the biomedical field are growing rapidly. Given the limited capacity of the pre-training vocabulary, character embedding can alleviate the out-of-vocabulary problem to a certain extent, but the latent representation extracted by a single character-level feature extractor has limitations. To solve this problem, a biomedical named entity recognition model based on character-level feature adaptive fusion is proposed. Firstly, a Convolutional Neural Network (CNN) and a Bidirectional Long Short-Term Memory (BiLSTM) network are used to extract character vectors of the text. During training, the weights of the two types of character vectors for each word are dynamically calculated, and the two types of character vectors are then spliced, so that the model makes fuller use of the information at character granularity; part-of-speech information and chunking information are added as additional features. The pre-trained word vectors, character-level features, and additional features are concatenated and fed into the BiLSTM-CRF neural network model for training. The results show that the average F1 values of the proposed model on the NCBI-disease and BioCreative II GM corpora are 87.14% and 81.04%, respectively, which effectively improves the effect of biomedical named entity recognition.
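As a rough illustration of the adaptive fusion step described above, the sketch below gates the CNN and BiLSTM character vectors with a learned weight and then splices them; the gating form, dimensions, and names are assumptions for illustration, not the paper's exact formulation.

    import torch
    import torch.nn as nn

    class CharFeatureFusion(nn.Module):
        """Hypothetical sketch: dynamically weight and splice CNN and BiLSTM character vectors."""
        def __init__(self, dim):
            super().__init__()
            # gate computed from the concatenation of the two character vectors
            self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

        def forward(self, char_cnn, char_lstm):
            # char_cnn, char_lstm: (batch, seq_len, dim) character-level word vectors
            g = self.gate(torch.cat([char_cnn, char_lstm], dim=-1))
            return torch.cat([g * char_cnn, (1 - g) * char_lstm], dim=-1)  # spliced output

    fusion = CharFeatureFusion(dim=50)
    out = fusion(torch.randn(2, 10, 50), torch.randn(2, 10, 50))  # shape (2, 10, 100)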
With the development of science and technology, the amount of data is increasing explosively, and mining valuable information from data has become a research hotspot in various industries. Accurate classification of the types of tourist attractions is of great significance for promoting the development of the cultural tourism industry. In this paper, a heterogeneous information network of scenic spots is constructed by integrating scenic spot reviews, and the SGAE model is proposed. Firstly, descriptions of some domestic 5A and 4A scenic spots and their review data are crawled from tourism and encyclopedia websites. Through processing and analysis of the data, 10 related topics are mined from the reviews, and a heterogeneous information network composed of scenic spot names, scenic spot reviews, and review topics is constructed. Secondly, different types of node information are mapped into the same space, and the layer-wise propagation rules of heterogeneous graph convolution are constructed. Then, considering that different types of neighbor nodes, and different nodes of the same type, have different impacts on a given node, double-layer attention is introduced into the heterogeneous graph convolution network, and the SGAE model is proposed to learn low-dimensional feature representations of scenic spot names; the scenic spot types are determined through Softmax normalization. Finally, compared with classical classification algorithms on the scenic spot data set, the accuracy and F1 value of the proposed SGAE model are improved by 5% and 4%, respectively, over the current optimal method. On the public data sets AGNews and MR, the SGAE model outperforms all comparison models; compared with the HGCN-RN model with the best classification effect, the accuracy and F1 value of SGAE on AGNews are improved by 1.9447% and 1.975%, respectively, and on MR the accuracy and F1 value are improved by 3.92% and 6.96%, respectively, which fully verifies the effectiveness of the proposed algorithm in classification tasks. In short, for the problem of scenic spot classification, the SGAE model proposed in this paper effectively improves classification performance and has good application prospects.
Domain boundary prediction is an important problem in the study of protein structure and function. Aiming at the low accuracy of most current domain boundary prediction methods, a network-flow-based protein domain boundary prediction algorithm, GraphDom, is proposed. The algorithm converts the protein domain boundary prediction problem into a network flow segmentation problem. The predicted residue contact distances are converted into a protein capacity graph according to a designed edge-capacity transform formula, and the protein residual capacity graph is then obtained by the Ford-Fulkerson algorithm. A depth-first search and a backtracking algorithm are used to obtain the strongly connected component graph and to enumerate all feasible minimum cuts from the protein residual capacity graph. Finally, a domain boundary evaluation function is designed based on the general properties of domains to evaluate the resulting partitions and then determine whether to continue the recursive division. The effectiveness of GraphDom is demonstrated on 120 non-redundant test proteins in comparison with 3 mainstream methods.
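The division step above rests on a minimum cut of the capacity graph. A minimal sketch, assuming a toy residue graph with placeholder capacities (the paper's edge-capacity transform and the enumeration of all feasible cuts are not reproduced here):

    import networkx as nx

    # Hypothetical residue-level capacity graph; nodes 0..5 stand for residues and
    # edge capacities would come from the contact-distance transform in practice.
    G = nx.DiGraph()
    edges = [(0, 1, 9.0), (1, 2, 8.5), (2, 3, 1.2),   # weak link -> likely domain boundary
             (3, 4, 9.3), (4, 5, 8.8)]
    for u, v, cap in edges:
        G.add_edge(u, v, capacity=cap)
        G.add_edge(v, u, capacity=cap)                 # contacts are symmetric

    cut_value, (part_a, part_b) = nx.minimum_cut(G, 0, 5)
    print(cut_value, sorted(part_a), sorted(part_b))   # the cut falls on the weak edge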
The massive parameters of a DNN need to be read and written repeatedly and frequently during training. NVM offers high read and write speed and is an effective means of improving the parameter-access efficiency of DNN training. However, existing NVM file systems generally use a file-based locking mechanism to handle the complicated read and write requests of upper-layer applications of the operating system, and this becomes the bottleneck when massive parameters are accessed by multiple concurrent read and write threads during DNN training. Targeting the characteristics of DNN training and the challenges of the I/O software stack on NVM, this paper designs a fine-grained locking strategy for concurrent threads and a concurrent I/O mechanism based on two-layer logs, and implements DNNFS, a prototype of a DNN-oriented highly concurrent NVM file system, based on NOVA. Filebench and Fio were used for testing under several types of workloads. The results show that DNNFS can improve IOPS by up to 35.8% and I/O bandwidth by 21.6% compared to NOVA.
For sequence tagging tasks such as Chinese word segmentation and part-of-speech tagging, this paper proposes a joint method for Chinese word segmentation and part-of-speech tagging that combines the BERT model, BiLSTM (bi-directional long short-term memory), CRF (conditional random field), and a Markov family model (MFM) or tree-like probability (TLP). Part-of-speech tagging based on the HMM (Hidden Markov Model) ignores the emission probability from the word itself to the part of speech. In part-of-speech tagging based on MFM or TLP, the part of speech of the current word is related not only to the part of speech of the previous word but also to the current word itself. The joint method helps to use part-of-speech information for word segmentation, and organically combining the two is beneficial for eliminating ambiguity and improving the accuracy of both word segmentation and part-of-speech tagging. The experimental results show that the joint method used in this paper can greatly improve the accuracy of word segmentation compared with the usual word segmentation model based on BiLSTM-CRF, and can also greatly improve the accuracy of part-of-speech tagging compared with the traditional HMM-based method.
Social recommendation based on graph neural networks is among the better-performing approaches in existing models; it can alleviate data sparseness by mining graph structure information. However, most existing models only consider shallow semantic context information, which makes it difficult for a GNN to learn high-quality user/item embedding representations. For this reason, this paper proposes a method for predicting user interest combined with semantic enhancement. The model builds a semantically enhanced user network and item network by learning the semantic relationships in the user-item bipartite graph, then feeds them together with the social network into a connection-aware graph neural network for the perception and aggregation of deep context information. The generated user interest and item attribute embedding representations are then concatenated, and finally the interaction probability between the target user and the candidate item is predicted. A series of simulation experiments was performed on the two public datasets Ciao and Epinions. The experimental results show that the model achieves average improvements of 3.55% and 2.21% in Recall@K and NDCG@K (normalized discounted cumulative gain) compared to the best baseline, which verifies the effectiveness of the algorithm after semantic enhancement and context-aware aggregation.
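For reference, the NDCG@K figure quoted above can be computed as follows; this is the standard binary-relevance form of the metric with example inputs, not the paper's evaluation script:

    import numpy as np

    def dcg_at_k(rels, k):
        # rels: graded relevance of recommended items in ranked order
        rels = np.asarray(rels, dtype=float)[:k]
        return np.sum(rels / np.log2(np.arange(2, rels.size + 2)))

    def ndcg_at_k(ranked_rels, num_relevant, k):
        ideal = [1.0] * min(num_relevant, k)        # best possible ranking (binary relevance)
        idcg = dcg_at_k(ideal, k)
        return dcg_at_k(ranked_rels, k) / idcg if idcg > 0 else 0.0

    print(ndcg_at_k([1, 0, 1, 1, 0], num_relevant=4, k=5))  # 1 = relevant item in that slot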
CGAN (Conditional Generative Adversarial Network) can learn distribution characteristics from data and generate new samples that conform to the original data distribution; using it as an oversampling method can improve classification performance on imbalanced data. However, when the minority class is small, it is difficult to ensure that the CGAN fully learns its distribution characteristics, which leads to poor quality of the synthesized samples. For this reason, an imbalanced data ensemble classification algorithm based on an improved CGAN is proposed. Firstly, SMOTEENN (SMOTE combined with edited nearest neighbours) is used to quickly generate minority-class samples until they reach a certain scale, and a CGAN model that can fully learn the data distribution is trained; the CGAN then regenerates minority-class samples that conform to the original data distribution to build a balanced dataset. Finally, using the CART decision tree as the base classifier, an improved AdaBoost method is trained on the balanced dataset to obtain the final classification model. The F1 value, AUC, and G-mean are selected as evaluation indicators. Experimental results on 8 public data sets show that the proposed method can significantly improve the classification accuracy on imbalanced data.
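A minimal sketch of the surrounding pipeline, with placeholder data and parameters: SMOTEENN resampling followed by AdaBoost over CART trees. The paper additionally trains a CGAN on the enlarged minority class and regenerates the synthetic samples, which is omitted here:

    from imblearn.combine import SMOTEENN                 # SMOTE + Edited Nearest Neighbours
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.tree import DecisionTreeClassifier

    # Placeholder imbalanced data (the paper uses 8 public data sets).
    X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)

    # Step 1: quickly rebalance the minority class; the paper then trains a CGAN on
    # this enlarged minority set and regenerates samples from the learned distribution.
    X_bal, y_bal = SMOTEENN(random_state=0).fit_resample(X, y)

    # Step 2: boosted ensemble of CART trees on the balanced data
    # (older scikit-learn versions use base_estimator= instead of estimator=).
    clf = AdaBoostClassifier(estimator=DecisionTreeClassifier(max_depth=3),
                             n_estimators=100, random_state=0)
    clf.fit(X_bal, y_bal)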
Query graph selection performs semantic matching between a question and candidate query graphs in query-graph-based knowledge base question answering (KBQA), and selects the optimal query graph to generate answers. Due to the inconsistent forms of the question (sequence structure) and the candidate query graphs (graph structure), the matching often suffers from complex encoding structures and poor matching performance. To tackle these problems, we propose a query graph selection method based on sequence matching. Specifically, we linearize the query graph into a sequence and turn the matching between the question and the query graph into matching between two sequences. Besides, we propose a new query graph ranking model that considers the global information of the candidate query graphs. Compared with previous methods, the proposed method not only effectively models the interaction information between the question and the query graph, but also introduces the global information of candidate query graphs to improve the performance of query graph selection. The experimental results show that the F1 of this system on WebQuestions and ComplexQuestions is 55.3 and 44.4, respectively.
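To make the linearization idea concrete, a hypothetical sketch is given below; the actual serialization order, separators, and relation names used in the paper are not specified here, so these are illustrative assumptions:

    # Flatten (subject, relation, object) edges of a query graph into one token sequence
    # that can be matched against the question by an ordinary sequence encoder.
    def linearize(query_graph):
        parts = [f"{subj} [SEP] {rel} [SEP] {obj}" for subj, rel, obj in query_graph]
        return " [SEP] ".join(parts)

    graph = [("?x", "people.person.place_of_birth", "Honolulu"),
             ("?x", "government.us_president.order", "44")]
    sequence = linearize(graph)   # fed, together with the question, to a sequence matcher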
Recently, knowledge base question answering (KBQA) technology has struggled to deal effectively with complex questions because their complex semantics are difficult to understand. For a complex question, decomposing first and then integrating is an effective way to parse complex semantics. However, in the process of question decomposition, entities are often misjudged or subject entities are missing, so the decomposed sub-questions do not match the original complex question. To address these problems, this paper proposes a decomposed semantic parsing method that incorporates fact texts. The processing of complex questions is divided into three stages: decomposition, extraction, and parsing. First, the complex question is decomposed into simple sub-questions; then the key information in the question is extracted; finally, the structured query is generated. At the same time, this paper constructs a fact text database, transforms triples into natural-language sentences, and adopts an attention mechanism to obtain richer knowledge. Experiments on the ComplexWebQuestions dataset show that the proposed model outperforms other baseline models.
For robot path planning in dynamic environments, the artificial potential field (APF) method easily falls into local minimum traps, while the deep double Q-network (DDQN) reinforcement learning algorithm suffers from excessive blind exploration, slow convergence, and uneven planned paths. This paper proposes a dynamic-environment robot path planning algorithm (PF-IDDQN) based on the artificial potential field method and an improved DDQN. First, the artificial potential field method is introduced into the improved DDQN to obtain initial global environment information, and the reward module is optimized. Secondly, four directional factors are added to the state set of the algorithm to improve the smoothness of the planned path. Finally, training simulations are carried out in a dynamic environment. The results show that the robot can reach the target position within a limited number of explorations in a dynamic environment, which verifies the effectiveness of the proposed algorithm.
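One common way to inject potential-field information into the reward is shown below as a hypothetical sketch; the gains, influence radius, and shaping form are illustrative values and not the paper's exact reward module:

    import numpy as np

    # Standard APF terms: attractive potential toward the goal, repulsive near obstacles.
    def potential(pos, goal, obstacles, k_att=1.0, k_rep=100.0, d0=2.0):
        u_att = 0.5 * k_att * np.linalg.norm(pos - goal) ** 2
        u_rep = 0.0
        for obs in obstacles:
            d = np.linalg.norm(pos - obs)
            if d < d0:                                  # obstacle only acts inside radius d0
                u_rep += 0.5 * k_rep * (1.0 / d - 1.0 / d0) ** 2
        return u_att + u_rep

    def shaped_reward(prev_pos, pos, goal, obstacles, base_reward):
        # reward the agent for moving downhill on the potential surface
        return base_reward + potential(prev_pos, goal, obstacles) - potential(pos, goal, obstacles)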
Database indexing is one of the important ways to improve database query performance. In this paper, an index selection method based on deep reinforcement learning is proposed, which supports the selection of both single-column and multi-column indexes. The method combines the index selection problem with deep reinforcement learning by modeling the index selection process as a Markov decision process. First, index evaluation rules are used to generate candidate indexes, which reduces the dimensionality of the neural network and makes it possible to generate multi-column indexes. By defining the state representation of the database environment, the actions of the agent, and the reward function of the deep reinforcement learning process, the possible interactions between indexes are fully considered and the optimal indexes under a given workload are selected. The experimental results show that the indexes selected by the proposed method can significantly improve the query performance of the database system compared with indexes selected by current classical index selection methods.
Embedding the semantic and structural information of heterogeneous graphs into a low-dimensional space is the key to feeding heterogeneous graph data into machine learning algorithms efficiently. However, existing heterogeneous graph neural networks ignore higher-order neighbors and avoid learning complex structural information. Therefore, this paper proposes a heterogeneous graph neural network model (HONG) that aggregates higher-order neighbors. Firstly, a higher-order subgraph based on meta-paths and a pooling operator for heterogeneous graphs, HetRepPool, are proposed and combined with GCN to learn complex structural information. Secondly, HAN is used to learn semantic information based on meta-paths. Finally, the embedded representation of nodes is obtained through an attention mechanism to achieve heterogeneous graph embedding. The experimental results show that, compared with other graph neural networks (GCN, GAT, GraphSAGE, HetGNN, HAN, GAHNE), HONG increases the average Micro-F1 by 3.88% and Macro-F1 by 4.13% in the heterogeneous graph node classification task, and the average ARI by 12.66% and NMI by 12.02% in the heterogeneous graph node clustering task.
The purpose of knowledge graph completion is to find models that fully express the semantic associations between entities and relations, so as to predict the missing parts of triples based on known entities and relations. The InteractE model rearranges the elements of entity and relation embeddings into a checkered structure, which increases the feature interaction between entities and relations and thus expresses richer semantics between them; it achieves the best effect among knowledge graph completion methods based on convolutional neural networks. However, while the checkered structure enhances feature interaction, it also disrupts the spatial structure information of the entity and relation embeddings. To solve this problem, this paper proposes IntSE, an improved knowledge graph completion method based on InteractE. IntSE uses SENet to emphasize the feature channels of the InteractE feature maps that are useful for knowledge graph completion and to suppress useless channels, so as to improve the completion effect. To make SENet more suitable for the knowledge graph completion task, its gate mechanism is further improved. Knowledge graph completion experiments on the public datasets FB15k-237 and WN18RR show that the performance of IntSE is significantly better than that of InteractE, and that IntSE is superior to mainstream embedding models based on convolutional neural networks.
With the massive deployment of data centers around the world and the surging demand for cloud computing services, the problem of high energy consumption is becoming more and more serious, and how to accurately predict the energy consumption of data centers has become an important research topic. In view of the uncertainty and nonlinear characteristics of server energy consumption in data centers, a real-time server energy consumption prediction method based on machine learning is proposed in this paper. The random forest algorithm is used to filter the input parameters of the model, the grid search method is leveraged to optimize the hyper-parameters of the model, and a machine learning method is used to build the server power model. Experimental results show that, compared with the benchmark algorithm, the average absolute error of the optimized model is reduced by 6.5%, and the average absolute error of the energy consumption model is less than 1.4% after adding the error confidence interval.
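A minimal sketch of the two preprocessing steps named above, with synthetic data standing in for server telemetry and a random forest assumed (as a placeholder) for the final power model:

    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import GridSearchCV

    # Placeholder features standing in for server telemetry (CPU, memory, I/O counters, ...).
    X, y = make_regression(n_samples=1000, n_features=20, noise=0.1, random_state=0)

    # Step 1: filter input parameters by random forest feature importance.
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    keep = np.argsort(rf.feature_importances_)[-8:]      # keep the 8 most informative inputs
    X_sel = X[:, keep]

    # Step 2: grid search over model hyper-parameters on the filtered inputs.
    grid = GridSearchCV(RandomForestRegressor(random_state=0),
                        param_grid={"n_estimators": [100, 300], "max_depth": [None, 10, 20]},
                        scoring="neg_mean_absolute_error", cv=5)
    grid.fit(X_sel, y)
    power_model = grid.best_estimator_                   # final server power model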
The clustering results of the traditional fuzzy c-means (FCM) algorithm are easily affected by the random selection of initial clustering centers, and the influence of different sample features and of sample importance on the clustering results is ignored during clustering. Aiming at these problems, a fuzzy clustering algorithm based on information entropy weighting combined with adaptive nearest neighbors and density peaks (ANNDP-WFCM) is proposed. Firstly, the initial clustering centers are found automatically by the adaptive nearest neighbors density peaks algorithm (ANNDP): the nearest neighbors of each sample can be found adaptively for data sets with different scales and structures, the local density of each sample is defined from the nearest-neighbor information, and the density peak points in the data set are selected as the initial clustering centers. Then, the importance of different features in the clustering process is distinguished by information entropy weighting; at the same time, the reciprocal of the distance between samples is used to weight the samples themselves, and the fuzzy clustering centers in the objective function are redefined. Finally, the Lagrange multiplier method is used to alternately optimize the objective function and obtain the final membership matrix and the clustering results. Comparative experiments on different public datasets verify that the ANNDP-WFCM algorithm needs fewer iterations and achieves higher clustering accuracy.
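The feature weighting can be illustrated with the standard information-entropy weighting scheme; this is a generic sketch under that assumption, and the paper's exact definition may differ:

    import numpy as np

    def entropy_weights(X, eps=1e-12):
        """Weight each feature by how much it discriminates between samples (low entropy -> high weight)."""
        n = X.shape[0]
        Xn = (X - X.min(axis=0)) / (np.ptp(X, axis=0) + eps)   # min-max normalize each feature
        P = (Xn + eps) / (Xn + eps).sum(axis=0)                # per-feature distribution over samples
        E = -(P * np.log(P)).sum(axis=0) / np.log(n)           # normalized entropy of each feature
        return (1.0 - E) / (1.0 - E).sum()

    X = np.random.rand(100, 4)
    w = entropy_weights(X)     # feature weights used inside the weighted FCM distance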
Aspect-level sentiment classification is a fine-grained sentiment analysis task that aims to analyze the sentiment toward different aspects of a text. To address the low classification accuracy and weak generalization of aspect-level sentiment classification models, an Attention-over-Attention BERT (AOA-BERT) model based on adversarial learning is proposed. Firstly, the text and the aspect words are modeled separately, and the hidden-layer features are extracted by BERT encoding. Secondly, the hidden-layer features are fed into the AOA network to extract a weight vector. Finally, the weight vector is multiplied with the modeled text feature vector, and cross-entropy loss and back-propagation of the parameters are applied. In addition, adversarial samples are generated and learned by an adversarial learning algorithm as a textual data enhancement method to optimize the decision boundary. The experimental results show that, compared with most deep neural network sentiment classification models, AOA-BERT improves the accuracy of sentiment classification. Meanwhile, ablation experiments prove that the structural design of AOA-BERT is reasonable.
Monitoring unsafe behavior of on-site personnel in substations based on image recognition is of great significance for ensuring the safety of power production. The identification of non-insulated gloves is an important part of safety monitoring, but the small hand/glove area, the small number of valid samples, and the difficulty of identifying ungloved persons in multi-person scenarios seriously restrict the performance of recognition algorithms. Aiming at these problems, this paper proposes a small-target detection and matching algorithm for detecting the wearing of insulating gloves. First, in view of the small number of valid samples in the dataset and the serious imbalance in the number of features, methods such as color transformation and image stretching are used to augment the images in the dataset. Secondly, to address the low target recognition rate caused by the small hand/glove area, a detection algorithm based on an improved YOLOv3 network is proposed: on the one hand, the feature pyramid structure of the original network is improved and multi-level feature information is fused to improve the accuracy of small-target recognition; on the other hand, the K-means algorithm is used to analyze the data set to obtain initial anchor candidates suitable for this data set, further improving small-target recognition. Finally, since current algorithms only recognize whether gloves are worn but find it difficult to identify the corresponding person in a multi-person scene, a set of hand-to-body association matching algorithms is designed, which can effectively match the detection results. The experimental results show that the proposed algorithm can effectively detect the wearing of insulating gloves: the accuracy of the improved YOLOv3 model is increased by 29%, and the accuracy of the improved YOLOv3 plus the matching algorithm is increased by 33.43%.
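Anchor selection by K-means over labeled box sizes is standard YOLO practice; a hypothetical sketch with 1 minus IoU as the distance and placeholder box data (not the paper's code):

    import numpy as np

    def wh_iou(boxes, centroids):
        # IoU between (w, h) pairs, assuming boxes share a common top-left corner
        inter = np.minimum(boxes[:, None, 0], centroids[None, :, 0]) * \
                np.minimum(boxes[:, None, 1], centroids[None, :, 1])
        union = boxes[:, 0:1] * boxes[:, 1:2] + (centroids[:, 0] * centroids[:, 1])[None, :] - inter
        return inter / union

    def kmeans_anchors(boxes, k=9, iters=100, seed=0):
        rng = np.random.default_rng(seed)
        centroids = boxes[rng.choice(len(boxes), k, replace=False)]
        for _ in range(iters):
            assign = np.argmax(wh_iou(boxes, centroids), axis=1)   # nearest = highest IoU
            for j in range(k):
                if np.any(assign == j):
                    centroids[j] = np.median(boxes[assign == j], axis=0)
        return centroids

    boxes = np.abs(np.random.randn(500, 2)) * 40 + 5   # placeholder (w, h) pairs from labels
    anchors = kmeans_anchors(boxes, k=9)               # initial anchor candidates for YOLOv3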
Natural scene text detection is an important approach for obtaining text information from scene images. Nevertheless, its techniques face severe challenges from many factors, such as complex backgrounds, rich languages, diverse orientations, and complex text line composition. Hence, studying natural scene text detection methods with high detection accuracy, strong versatility, and good robustness is one of the hot issues in computer vision, and methods based on deep convolutional networks have become mainstream. After introducing the background and challenges of text detection in natural scenes, this paper classifies the state-of-the-art methods into three categories according to their backbone networks: text detection methods based on the VGG network, methods based on the ResNet network, and methods based on the FPN structure. The core ideas, technical advantages, and shortcomings of the various methods are expounded in detail. Subsequently, the public datasets for text detection in natural scenes are summarized, and an objective comparison of representative methods in terms of text detection performance is presented. Finally, the difficulties of natural scene text detection are summarized, and the future development of natural scene text detection based on deep convolutional networks is discussed.
The low-dimensional manifold model and its second-order extensions are visual priors proposed in recent years; they have been successfully applied to gray-scale image restoration and have achieved excellent results. However, the existing regularizations are based merely on the spatial dimensions of the image and the smoothness of the target structure, and miss the energy concentration feature that may favor better robustness of the devised model. In response to this problem, a gray-scale image restoration algorithm based on weighted second-order regularization is proposed. Specifically, following the basic idea of the second-order low-dimensional reconstruction term, we present a reweighted cost function based on decomposition coefficients. Moreover, a new column weight is added to the original row weights, which highlights the intrinsic energy concentration characteristics. By taking into account both local and non-local bases, the proposed method can extract abundant features from image blocks. The final cost function can be decomposed into several sub-linear equations for optimization. A large number of numerical experiments have been carried out on several classic images. The restoration results show that the proposed method based on reweighted second-order regularization is better than most state-of-the-art algorithms in both visual and numerical terms.
Aiming at the common pedestrian occlusion and the multi-scale problem of pedestrian targets in crowded scenes, a crowded pedestrian detection algorithm that integrates context and spatial information is proposed. The method first improves the feature pyramid network structure and, by adding weighted fusion branches, combines multi-scale features fully and effectively to cope with pedestrian scale changes. Secondly, context and spatial feature information are fused to obtain more potential pedestrian feature information and alleviate the feature loss caused by occlusion. At the same time, to make the model perform better on different datasets, a data augmentation method that simulates occlusion is introduced to improve the generalization ability of the model. To verify the effectiveness of the proposed method, experimental evaluations are carried out on the CrowdHuman and CityPersons datasets. The experimental results fully demonstrate the effectiveness of the proposed algorithm: compared with the baseline algorithm, the miss rate is reduced by 3.7%, which greatly improves detection performance in crowded scenes.
To solve the problem that existing models are too large, this paper proposes a Lightweight Multi-task U-shaped Network (LMUNet) for salient object detection. To maintain performance while remaining lightweight, Multi-task U-shape Network (MUN) modules and a Down-sampling Parallel Module (DPM) are designed. The MUN modules use edge features to supplement the details of the shallow features and emphasize the edge area, and use skeleton features to further strengthen and refine the structure of the image features. In the DPM, different receptive fields and global characteristics are obtained through down-sampling and dilated convolution, which mainly supplement the structural information of the model and improve object localization. Considering that features with large scale differences cannot be adapted to and fused with each other directly, parallel structures are used for adjacent fusion, and multiple features are gradually combined into one. The proposed method achieves good performance on four SOD datasets and further balances model size and accuracy. The validity and reliability of LMUNet are illustrated by comparison with other excellent models.
To improve the detection of multi-scale road objects, this paper proposes an improved algorithm with optimized localization confidence to solve the problem that detection quality is represented unreasonably in the NMS step of detection algorithms. Firstly, a research framework is constructed based on RepPoints to study the sensitivity of localization confidence to multi-scale road targets, and a mixed localization confidence is proposed according to the findings. Then, a CIoU localization confidence is proposed to solve the problem that IoU cannot distinguish bounding boxes with the same degree of overlap. Finally, an improved algorithm with optimized localization confidence is obtained by combining these two confidences, which solves the problem of unreasonable representation of detection quality. The experimental results on Cityscapes show that both the mixed localization confidence and the CIoU localization confidence are effective when used separately, and the accuracy is improved by 2.4% when they are used together. The detection accuracy of multi-scale objects is significantly improved without degrading real-time performance. Compared with mainstream detection algorithms for road scenes such as Cascade-RCNN and FCOS, the proposed algorithm achieves the highest mAP, APM, and APL.
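The CIoU measure referred to above follows the standard definition (IoU minus a center-distance term and an aspect-ratio consistency term); a self-contained sketch for two axis-aligned boxes:

    import math

    def ciou(a, b, eps=1e-9):
        # a, b: boxes as (x1, y1, x2, y2)
        ax1, ay1, ax2, ay2 = a
        bx1, by1, bx2, by2 = b
        # intersection over union
        iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
        ih = max(0.0, min(ay2, by2) - max(ay1, by1))
        inter = iw * ih
        union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
        iou = inter / (union + eps)
        # squared center distance over the squared diagonal of the enclosing box
        cw, ch = max(ax2, bx2) - min(ax1, bx1), max(ay2, by2) - min(ay1, by1)
        rho2 = ((ax1 + ax2 - bx1 - bx2) ** 2 + (ay1 + ay2 - by1 - by2) ** 2) / 4.0
        c2 = cw ** 2 + ch ** 2 + eps
        # aspect-ratio consistency term
        v = (4 / math.pi ** 2) * (math.atan((bx2 - bx1) / (by2 - by1 + eps))
                                  - math.atan((ax2 - ax1) / (ay2 - ay1 + eps))) ** 2
        alpha = v / (1 - iou + v + eps)
        return iou - rho2 / c2 - alpha * v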
In target tracking based on spatio-temporally regularized correlation filters, the spatial weight matrix introduced to suppress boundary effects cannot adapt to target changes, and the hyperparameter of the temporal regularization term is fixed and cannot be updated adaptively, so background noise is easily introduced and the model drifts. A target tracking algorithm with spatio-temporal regularization and adaptive correlation filtering based on sample reliability is proposed. First, the algorithm adaptively adjusts the spatial weight reference matrix according to the spatial reliability of the image, and the adaptive spatial regularization term combined with the spatial weight reference matrix reduces the influence of boundary effects to a certain extent. Then, the degree of change between the response maps of two consecutive frames is used to determine the reference value of the temporal regularization hyperparameter, so as to avoid tracking drift caused by sudden changes of the model. Finally, the Alternating Direction Method of Multipliers (ADMM) is used to solve the objective function iteratively to ensure the efficiency of the algorithm. Extensive experiments on the public OTB2013 and OTB2015 datasets show that the proposed algorithm copes better with target tracking in complex environments, and its distance precision and tracking success rate are better than those of the comparison algorithms.
Aiming at the problem that ordinary convolutional neural networks cannot make full use of the fine-grained and contour features of sketches, so that the classification effect is not ideal, a two-stage sketch classification method based on multiple features is proposed in this paper. The method combines the classification results of coarse-grained, fine-grained, and contour features, and is trained in two stages to extract more sufficient features. In the first training stage, the classification results of the coarse-grained sketch features are obtained directly by a convolutional neural network; the classification results of the fine-grained sketch features are obtained by adding bilinear pooling; and the classification results of the contour features are obtained by extracting the contour of the sketch. In the second training stage, a trainable classification-result fusion module is proposed to fuse the classification results from the first stage, and a regularization term is introduced to prevent it from overfitting. The proposed method is compared with several recent methods on the TU-Berlin dataset, and the experimental results show its effectiveness.
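The fine-grained branch relies on bilinear pooling; the sketch below shows the standard bilinear-CNN style pooling (outer product over spatial positions with signed-sqrt and L2 normalization), with illustrative channel sizes rather than the paper's configuration:

    import torch
    import torch.nn.functional as F

    def bilinear_pool(feat_a, feat_b):
        # feat_a: (B, C1, H, W), feat_b: (B, C2, H, W) from two (or the same) CNN branches
        B, C1, H, W = feat_a.shape
        C2 = feat_b.shape[1]
        a = feat_a.reshape(B, C1, H * W)
        b = feat_b.reshape(B, C2, H * W)
        x = torch.bmm(a, b.transpose(1, 2)) / (H * W)          # (B, C1, C2) outer-product pooling
        x = x.reshape(B, C1 * C2)
        x = torch.sign(x) * torch.sqrt(torch.abs(x) + 1e-10)   # signed square-root normalization
        return F.normalize(x, dim=1)                           # L2 normalize before the classifier

    feats = torch.randn(4, 256, 14, 14)
    descriptor = bilinear_pool(feats, feats)                   # (4, 256*256) fine-grained descriptor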
Stereo matching is one of the core steps in binocular stereo vision. In outdoor scenes, traditional stereo matching algorithms struggle to achieve high matching accuracy, and edge computing devices must be both low-cost and efficient. An improved semi-global stereo matching algorithm is proposed to solve these problems. First, a hierarchical iterative matching strategy is adopted to reduce the computational complexity. Secondly, an improved cost calculation method makes the initial cost more accurate. Finally, parallel optimization is used to speed up the calculation. Experimental results show that the error matching rate of the proposed algorithm reaches 4.72% and 6.04% on the KITTI2012 and KITTI2015 datasets, respectively. Measuring efficiency at an image resolution of 1800×1500 with 256 disparities, the fully optimized algorithm is 23.6 times faster. The algorithm can effectively improve the efficiency of stereo matching on edge CPU devices, and the error matching rate of the disparity map reaches the level of mainstream classic algorithms.
With the development of the IoT and the mobile Internet, network devices are increasing exponentially and networks are getting larger and larger, which brings new challenges to network management. SNMP can effectively manage large-scale networks; however, because of the large number of managed network elements, SNMP message traffic increases the overhead in the backbone network. To reduce the communication overhead that Trap messages generated by SNMP agents impose on the backbone network, we propose an SDN-based Trap message aggregation method. The method uses the controller to issue flow table rules that forward Trap messages to an aggregation server for aggregation before they are sent to the management station, thus effectively reducing SNMP Trap traffic in the backbone network. To address the high additional overhead, poor flexibility, and high load on the management station of network management under the traditional network structure, this paper also designs an SDN-based network management architecture, which improves flexibility and alleviates the load on management stations. Experimental results show that the SDN-based Trap message aggregation method reduces management traffic in the backbone network by 41.328% compared with the traditional method in which Trap messages are sent directly to the management stations, significantly reducing the overhead of management traffic.
Accurately measuring per-flow cardinality in high-speed networks plays an essential role in traffic engineering, anomaly detection, and network security. However, the on-chip memory of network processor chips is extremely limited and cannot record network traffic information directly, so distinct flows must be recorded in shared space, and a compact data summary (a sketch) is used to process and store flow information in real time. This processing mixes different flows together and leads to noise that is difficult to filter out in per-flow cardinality estimation. We propose a learning-based enhanced per-flow cardinality estimation algorithm to address these problems. The proposed algorithm improves on existing work in real-time packet processing and memory update rules, and develops a more efficient encoding method to reduce noise as much as possible under memory sharing. In addition, our algorithm uses deep learning models to learn latent patterns from the per-flow encoded data to improve the accuracy of per-flow cardinality estimation. Experimental results show that our solution has higher accuracy and lower memory overhead than vHLL.
Rail transit is the backbone of the urban comprehensive passenger transport system and connects the main passenger flow distribution points of a city; its stable and efficient operation can promote the sustainable development of the urban society and economy. Firstly, this paper constructs a rail transit hypernetwork model based on a hypergraph, with lines as nodes and stations as hyperedges, and puts forward a functional evaluation index for lines. Secondly, a nonlinear load-capacity cascading failure model is constructed based on passenger flow, and a secondary load distribution mechanism is proposed; the effects of capacity control parameters and different attack strategies on network cascading failures are discussed. Network robustness is quantified by the structural index of network efficiency and the functional index of passenger flow loss rate. Finally, taking the Shanghai rail transit network as an example, the validity and reliability of the model are verified. The results show that the capacity control parameters have optimal values under different states, which allow the network to maintain strong stability at a lower construction cost. Under the optimal parameters α=0.9 and β=0.95, the dynamic robustness of the network against intentional attacks is stronger than against random attacks, indicating that a reasonable increase of node capacity can significantly improve network robustness. In the static network structure, betweenness is the most reliable measure of node criticality, while under cascading failures the functional evaluation index is the most reliable.
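A hypothetical sketch of a load-capacity cascading failure simulation of the kind described, using betweenness as the load and the common nonlinear capacity rule C = L + β·L^α; the paper's passenger-flow loads and secondary redistribution mechanism are not reproduced here:

    import networkx as nx

    def cascade(G, attacked, alpha=0.9, beta=0.95):
        G = G.copy()
        load = nx.betweenness_centrality(G)
        cap = {v: load[v] + beta * load[v] ** alpha for v in G}   # nonlinear capacity rule
        failed = set(attacked)
        while failed:
            G.remove_nodes_from(failed)
            if G.number_of_nodes() == 0:
                break
            load = nx.betweenness_centrality(G)          # loads redistribute on the reduced network
            failed = {v for v in G if load[v] > cap[v]}  # overloaded nodes fail in the next round
        return G                                         # surviving network after the cascade

    G0 = nx.barabasi_albert_graph(100, 2, seed=0)
    survivors = cascade(G0, attacked=[0])                # intentional attack on one node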
Limited node resources and impersonal services are major problems in space-ground integrated networks. To solve these problems, this paper proposes an adaptive multi-constrained QoS (Quality of Service) routing algorithm. First, using the ability of SDN (Software-Defined Networking) to obtain link QoS parameters in real time, a transmission cost model that considers link quality, remaining bandwidth, and node load is proposed, and a multi-constrained QoS routing model with minimum path cost as the optimization objective is established to maximize network throughput. Then, to meet the QoS requirements of data flows with different priorities, the Adam (adaptive moment estimation) algorithm is used to solve the threshold adaptation problem of the multi-constraint model; compared with traditional manual setting, it adapts better to network changes and thus provides more personalized service quality. Finally, an improved ant colony algorithm is used to solve the multi-constraint optimization problem: candidate nodes are optimized through double tabu tables, and the setting of the pheromone evaporation coefficient is addressed by using the programmability of SDN, so as to find the optimal path. Compared with related solutions, this method not only meets the multi-priority QoS requirements of space information networks, but also performs better in terms of convergence speed, network throughput, and load distribution index.
With the wide application of deep neural networks in various computer vision tasks, the vulnerability of deep learning has been exposed; adversarial attacks and adversarial sample generation algorithms have become hot research topics, and a series of advances has been made. The one-pixel attack generates adversarial samples by modifying only one pixel in the image, which gives it an advantage over other adversarial attack algorithms in terms of concealment. However, because it uses differential evolution, which queries the target model many times to search for the target pixel, the attack is inefficient; at the same time, the search easily falls into a local optimum, so the attack effect is limited. This paper improves on these problems and proposes an attention-based two-stage one-pixel attack. The method introduces an attention mechanism to determine candidate perturbation regions, which reduces redundant computation and, to a certain extent, avoids falling into local optima, so that one-pixel adversarial samples are generated more efficiently. Experiments on multiple deep convolutional models prove that the adversarial samples generated by this scheme achieve adversarial attacks with a high success rate, have good transferability, and maintain the inherent concealment advantage of the one-pixel attack.
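The baseline being improved here is the classic differential-evolution one-pixel attack; below is a hypothetical, self-contained sketch with a dummy stand-in classifier, where the paper's attention stage would simply shrink the (x, y) search bounds to candidate regions:

    import numpy as np
    from scipy.optimize import differential_evolution

    H, W = 32, 32
    image = np.random.rand(H, W, 3)   # placeholder input image
    true_label = 0

    def model(img):
        # dummy two-class "classifier" for illustration only; in practice the attacked CNN
        p0 = img.mean()
        return np.array([p0, 1.0 - p0])

    def objective(z):
        x, y, r, g, b = z
        perturbed = image.copy()
        perturbed[int(x), int(y)] = [r, g, b]        # modify a single pixel
        return model(perturbed)[true_label]          # minimize confidence of the true class

    # the attention stage would restrict these (x, y) bounds to candidate perturbation regions
    bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)]
    result = differential_evolution(objective, bounds, maxiter=30, popsize=20, seed=0)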
FBMC is a communication system based on a prototype filter, but the imaginary interference introduced by the prototype filter affects the correctness of channel estimation. Aiming at this problem, based on a study of block pilots, optimized discrete pilots, and channel coding, this paper proposes an FBMC-LSTM channel estimation algorithm with optimized discrete pilots. Simulation experiments are carried out with different modulation orders on channels such as Vehicular A (200 km/h), TDL-A (300 ns), Pedestrian A (10 km/h), and WINNER. The experimental results show that the proposed algorithm has advantages over the compared algorithms.
TrustZone technology provides software with a trusted execution environment and a rich execution environment that are isolated from each other by hardware security extensions. The interrupt isolation mechanism is a crucial isolation mechanism of TrustZone: it ensures that secure interrupts and non-secure interrupts are handled in the trusted execution environment and the rich execution environment, respectively. If it is incorrect, secure interrupts may be handled by the rich execution environment, which affects the security of the trusted execution environment. This paper proposes a formal verification method for the interrupt isolation mechanism of the ARMv8 TrustZone architecture. We establish a formal model of the critical software and hardware of the interrupt isolation mechanism in the theorem prover Isabelle/HOL. The model is a state transition system that includes interrupt handlers, the TrustZone monitor, the interrupt controller, and other components. On the basis of proving the correctness of the model, this paper verifies information flow security properties such as noninterference, nonleakage, and noninfluence by using the unwinding theorem. The results show that the TrustZone interrupt isolation mechanism satisfies these information flow security properties and that there is no covert information flow channel in the interrupt handling process.