Yuqing Yang, Jianghui Cai, Haifeng Yang and Xujun Zhao, Department of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, Shanxi, China
To address the problem of low trajectory clustering accuracy caused by focusing only on the characteristics of Stop Points, a trajectory clustering method based on feature analysis and modelling of both Stop and Move Points is proposed. Firstly, different characteristics of the trajectory points are explored and evaluated experimentally, and the Point Density (PD) characteristic is selected to carry out trajectory clustering. Secondly, a trajectory clustering algorithm called PMS is proposed, which consists of the following steps: (1) obtaining a 1D approximate representation of the trajectory using PD; (2) establishing a Point Density Gaussian Model (PDGM) based on the 1D approximate representation; (3) fitting the 1D approximate representation with the PDGM until the convergence condition is satisfied; (4) extracting the points that are not fitted by the PDGM as Stop Points. Experimental results show that PMS can reduce erroneous merging of adjacent clusters and find trajectory clusters with different shapes.
Trajectory Clustering, Stop Points Extraction, Move Points, Feature Analysis, Point Density Gaussian Model.
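The four steps above can be illustrated with a minimal sketch. The helper names, the neighbourhood radius, and the single-Gaussian fit with a k-sigma cutoff are illustrative assumptions, not the authors' PMS implementation:

```python
import math

def point_density(traj, radius=1.0):
    """Step (1): 1D approximate representation -- neighbour count per point."""
    pd = []
    for i, (xi, yi) in enumerate(traj):
        count = sum(
            1 for j, (xj, yj) in enumerate(traj)
            if i != j and math.hypot(xi - xj, yi - yj) <= radius
        )
        pd.append(count)
    return pd

def fit_gaussian(values):
    """Steps (2)-(3): fit a Gaussian (mean, std) to the density sequence,
    a stand-in for the PDGM describing Move Points."""
    mu = sum(values) / len(values)
    var = sum((v - mu) ** 2 for v in values) / len(values)
    return mu, (math.sqrt(var) if var > 0 else 1.0)

def stop_points(traj, radius=1.0, k=2.0):
    """Step (4): points whose density lies far above the fitted model
    are treated as Stop Point candidates (the object lingered there)."""
    pd = point_density(traj, radius)
    mu, sigma = fit_gaussian(pd)
    return [i for i, v in enumerate(pd) if v - mu > k * sigma]
```

On a trajectory of widely spaced move samples followed by a burst of co-located samples, the co-located points have anomalously high density and are flagged as Stop Points.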
Jingci Li and Guangquan Lu, Department of Software Engineering, Guangxi Normal University, Guilin, China
Recently, learning representations of graphs has achieved great success. While there are many graph neural networks for single tasks on graph-structured data, multi-task learning on graph neural networks remains largely unexplored. Graph autoencoders and graph variational autoencoders are powerful frameworks for link prediction and graph generation, but the learned representation is mostly fed to a downstream node clustering task instead of being trained simultaneously. Motivated by these observations, we propose a novel framework called the multi-task adversarially regularized graph variational autoencoder (MTADGVAE), which combines a graph variational autoencoder with an adversarial mechanism for semi-supervised node classification and unsupervised link prediction. We validate our framework on the Cora, Pubmed and Citeseer datasets, and the experimental results demonstrate competitive performance compared with other advanced frameworks. We also develop three variants of MTADGVAE to obtain a more robust embedding.
Graph Variational Autoencoder, Adversarial Model, Multi-task Learning.
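A multi-task objective of this kind is typically a weighted sum of per-task losses. The sketch below only illustrates that combination; the function names, toy inputs, and weights are invented for illustration and do not reproduce MTADGVAE's actual architecture or training procedure:

```python
import math

def bce(p, y):
    """Binary cross-entropy for one predicted link probability (link prediction)."""
    eps = 1e-12
    return -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))

def cross_entropy(probs, label):
    """Cross-entropy for one labelled node (semi-supervised classification)."""
    return -math.log(probs[label] + 1e-12)

def multitask_loss(link_preds, link_labels, node_probs, node_labels,
                   adv_loss, w_link=1.0, w_node=1.0, w_adv=0.1):
    """Weighted sum of link-prediction, node-classification and
    adversarial-regularization terms, trained jointly rather than in stages."""
    l_link = sum(bce(p, y) for p, y in zip(link_preds, link_labels)) / len(link_preds)
    l_node = sum(cross_entropy(p, y) for p, y in zip(node_probs, node_labels)) / len(node_probs)
    return w_link * l_link + w_node * l_node + w_adv * adv_loss
```

Better predictions on either task lower the joint objective, which is what lets both tasks shape one shared embedding.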
Brian Thoms and Jason Isaacs, CSU Channel Islands, USA
Classifier algorithms are a subfield of data mining and play an integral role in finding patterns and relationships within large datasets. In recent years, fake news detection has become a popular area of data mining for several important reasons, including its negative impact on decision-making and its virality within social networks. Traditional fake news detection relied primarily on information context, while modern approaches rely on auxiliary information to classify content. Modelling with machine learning and natural language processing can aid in distinguishing between fake and real news. In this research, we mine data from Reddit, the popular online discussion forum and social news aggregator, and evaluate machine learning classifiers on their accuracy in detecting fake news using only a minimal subset of data.
Machine Learning, Natural Language Processing, Reddit Social Network.
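As a rough illustration of the kind of classifier evaluated in such studies, the sketch below implements a minimal multinomial Naive Bayes over bag-of-words features. The toy sentences and labels are invented; this is not the Reddit corpus or the specific classifiers used in the paper:

```python
import math
from collections import Counter, defaultdict

class NaiveBayes:
    """Minimal multinomial Naive Bayes text classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.word_counts = defaultdict(Counter)
        self.class_counts = Counter(labels)
        for text, label in zip(texts, labels):
            self.word_counts[label].update(text.lower().split())
        self.vocab = {w for c in self.word_counts.values() for w in c}
        return self

    def predict(self, text):
        best, best_lp = None, -math.inf
        total_docs = sum(self.class_counts.values())
        for label, cc in self.class_counts.items():
            lp = math.log(cc / total_docs)  # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.lower().split():
                # add-one smoothing so unseen words do not zero out the likelihood
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
```

Even such a simple model separates stylistically distinct classes, which is why Naive Bayes is a common baseline in fake news detection benchmarks.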
Eldane Vieira, Rita Maria Silva Julia and Elaine Ribeiro Faria, Federal University of Uberlândia, Brazil
A great variety of real-world problems can be satisfactorily solved by automatic agents that use adaptive learning techniques conceived to deal with data stream scenarios. The success of such agents depends on their ability to detect changes that occur in the problem environment and to use such information to conveniently adapt their decision-making modules. Several change detection methods have been proposed, with emphasis on the M-DBScan algorithm, which is the basis of this proposal. However, none of these methods is able to capture the meaning of the identified changes. Thus, the main contribution of this work is to propose an extended version of M-DBScan, called Semantic-MDBScan, endowed with such ability. The proposed approach was validated through artificial data sets representing distinct scenarios. The experiments show that Semantic-MDBScan indeed achieves the intended goal.
Data Stream, Behavior change detection, Semantic assignment.
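The core idea of attaching a meaning to a detected change can be sketched very simply. The code below uses a sliding-window mean comparison as a stand-in for M-DBScan's novelty-based detection; only the semantic-labelling step of Semantic-MDBScan is illustrated, and the window size and threshold are illustrative assumptions:

```python
def detect_change_with_semantics(stream, window=5, threshold=1.0):
    """Compare consecutive window means over a data stream and label each
    detected change with its direction.

    Returns a list of (index, semantic_label) tuples. A real system would
    replace the mean comparison with a proper drift detector; the point
    here is that a detected change also carries a meaning, not just a flag.
    """
    changes = []
    for i in range(window, len(stream) - window + 1, window):
        prev = sum(stream[i - window:i]) / window
        curr = sum(stream[i:i + window]) / window
        if abs(curr - prev) > threshold:
            label = "increase" if curr > prev else "decrease"
            changes.append((i, label))
    return changes
```

A plain detector would report only the indices; the semantic label is the extra information an agent can use to decide *how* to adapt, not merely *that* it must adapt.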
Liu Pei¹, Chen Zipeng¹, Zhao Longgang¹, Wang Nannan², Xia Fan², Sun Peixia¹, Chang Qian¹, ¹Research Institute of China Telecom Corporation Limited, Castle University, Beijing, China, ²Beijing University of Posts and Telecommunications, Beijing, China
With the expansion and development of the telecom business, maintenance in telecom faces increasing pressure and challenges, and urgently needs digital transformation. Work orders are an important part of telecom maintenance, mainly involving two types of scenarios: work order dispatching and work order interaction. The work order dispatching system faces the challenge of accurately identifying work orders and dispatching them to the corresponding departments in time, while the work order interaction system needs to respond quickly to feedback from provincial companies and on-site operators to avoid delayed processing. Recently, artificial intelligence algorithms have shown superiority in solving the problems of these two scenarios. In this paper, we propose solutions for both. For work order dispatching, on the one hand, we propose a text classification algorithm based on a work order pre-training model to identify and dispatch work orders; on the other hand, we propose a Bilateral Branching Network for text recognition (T-BBN) to build a work order dispatching system for data with a long-tail distribution. For the work order interaction scenario, we perform model fusion based on four deep learning models, use contextual semantic modeling to achieve intelligent interaction with provincial companies or on-site feedback content, and innovatively incorporate shallow features according to business reality. Experiments demonstrate the effectiveness of all the proposed methods, which can better accomplish the work order tasks in actual telecom business.
Intelligent Response, Maintenance in Telecom, Work Order Dispatching, Work Order Interaction, Work Order Pre-training Model, T-BBN.
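Bilateral-branch designs for long-tailed data pair a conventionally sampled branch with a re-balancing branch that over-samples rare classes. The sketch below shows only that reversed-sampling idea, with sampling probabilities inversely proportional to class frequency; it is not T-BBN itself, and the label names are invented:

```python
from collections import Counter

def reversed_sampling_weights(labels):
    """Per-example sampling weights inversely proportional to class frequency,
    as used by the re-balancing branch of bilateral-branch networks.

    Each class ends up with the same total probability mass, so tail-class
    work orders are seen as often as head-class ones during training.
    """
    freq = Counter(labels)
    inv = {c: 1.0 / n for c, n in freq.items()}
    total = sum(inv[c] for c in labels)
    return [inv[c] / total for c in labels]
```

With 8 "head" and 2 "tail" examples, each tail example receives four times the weight of a head example, so both classes contribute equally to a weighted sampler.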
Shahzad Ashraf, College of Internet of Things Engineering (IoT), Hohai University, China
Over time, an enormous quantity of data is being generated, which requires a shrewd technique for handling such a big database to smooth the data storage and dissemination process. Storing and exploiting such large data quantities requires sufficiently capable systems with a proactive mechanism to meet the technological challenges. The available traditional Distributed File System (DFS) struggles to handle dynamic variations and requires an undefined settling time. Therefore, to address such huge data handling challenges, a proactive grid-based data management approach is proposed, which arranges the huge data into various tiny chunks called grids and places them according to the currently available slots. Data durability and computation speed have been aligned by designing data dissemination and data eligibility replacement algorithms. This approach substantially enhances the durability of data access and the writing speed. The performance has been tested on numerous grid datasets: chunks were analyzed over various iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after substantial iterations. The chunks were found to be on an optimal node from the first iteration of replacement, which amounts to more than 21% of working clusters compared with the traditional approach.
Data Mining, Exorbitant Data, Nodes, Disseminating Process, Data Chunks, Slots, Grid, Data Storage.
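A toy sketch of the grid-based placement idea: split data into tiny fixed-size chunks and greedily place each chunk on the node with the most free slots. The node model, chunk size, and greedy policy are illustrative assumptions, not the paper's actual dissemination or eligibility-replacement algorithms:

```python
def split_into_chunks(data, chunk_size):
    """Arrange data into tiny fixed-size chunks ('grids')."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

def place_chunks(chunks, free_slots):
    """Greedily place each chunk on the node with the most free slots.

    `free_slots` maps node name -> available slot count; returns a
    chunk-index -> node mapping, raising if total capacity runs out.
    """
    slots = dict(free_slots)
    placement = {}
    for i, _ in enumerate(chunks):
        node = max(slots, key=slots.get)
        if slots[node] == 0:
            raise RuntimeError("no free slots left")
        placement[i] = node
        slots[node] -= 1
    return placement
```

Placing on the least-loaded node first keeps the cluster balanced, which is one plausible way to realize the "placement according to currently available slots" described in the abstract.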
Bamrung Tausiesakul, Department of Electrical Engineering, Faculty of Engineering Srinakharinwirot University, 63 Mu 7, Rangsit-Nakhonnayok Road, Canal 16, Ongkharak, Nakhon Nayok 26120, Thailand
Several methods for signal acquisition in compressed sensing have been proposed in the past. The iterative hard thresholding (IHT) algorithm and its variants can be considered a kind of those methods based on gradient descent. Unfortunately, when the objective function is highly nonlinear, the steepest descent typically suffers from many local minima. One way to steer the nonlinear search close to the global solution is the manipulation of the search step size. In this work, a numerical search is used to find an optimal step size, in the sense of minimal signal recovery error, for the normalized IHT algorithm. The proposed step size is compared to a randomly chosen fixed one, as in former works. Numerical examples illustrate that the optimal parameters that make up a good step size can provide a lower normalized root mean square error of the acquired signal than the arbitrarily chosen step size. The performance improvement is pronounced for a large number of nonzero elements in the sparse signal.
Compressive sensing, iterative hard thresholding, sparsity pattern.
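The basic IHT iteration, x ← H_s(x + μ Aᵀ(y − Ax)), can be sketched in a few lines; the step size μ that enters the gradient update is exactly the parameter whose choice the paper studies. This is plain IHT with a fixed μ on a trivially well-conditioned toy matrix, not the paper's normalized variant or its step-size search:

```python
def hard_threshold(x, s):
    """H_s: keep the s largest-magnitude entries of x, zero the rest."""
    keep = set(sorted(range(len(x)), key=lambda i: abs(x[i]), reverse=True)[:s])
    return [xi if i in keep else 0.0 for i, xi in enumerate(x)]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def iht(A, y, s, mu, iters=100):
    """Iterative hard thresholding: x <- H_s(x + mu * A^T (y - A x)).

    `mu` is the step size; normalized IHT instead recomputes it each
    iteration, and the paper searches for the value minimizing recovery error.
    """
    At = transpose(A)
    x = [0.0] * len(A[0])
    for _ in range(iters):
        r = [yi - axi for yi, axi in zip(y, matvec(A, x))]  # residual
        g = matvec(At, r)                                   # gradient step direction
        x = hard_threshold([xi + mu * gi for xi, gi in zip(x, g)], s)
    return x
```

Real compressed sensing uses an underdetermined random A; with the identity matrix used in the test, one iteration at μ = 1 already recovers the sparse signal, which makes the role of the step size easy to see.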