Yuqing Yang, Jianghui Cai, Haifeng Yang and Xujun Zhao, Department of Computer Science and Technology, Taiyuan University of Science and Technology, Taiyuan, Shanxi, China
Aiming at the problem of low trajectory clustering accuracy caused by focusing only on the characteristics of Stop Points, a trajectory clustering method based on feature analysis and modelling of both Stop and Move Points is given. Firstly, different characteristics of the trajectory points are explored and evaluated by experiment, and the Point Density (PD) characteristic is selected to carry out trajectory clustering. Secondly, a trajectory clustering algorithm called PMS is proposed, which contains the following steps: (1) obtaining a 1D approximate representation of the trajectory using PD; (2) establishing a Point Density Gaussian Model (PDGM) based on the 1D approximate representation; (3) fitting the 1D approximate representation with the PDGM until the convergence condition is satisfied; (4) regarding the points that are not fitted by the PDGM as Stop Points and extracting them. Experimental results show that PMS can reduce erroneous merging of adjacent clusters and find trajectory clusters with different shapes.
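As a rough illustration of steps (1)-(4), the following sketch fits a single Gaussian to a toy 1D point-density sequence and treats the points the Gaussian does not explain as Stop Points. This is a non-iterative simplification of the PDGM fitting; the density values and the deviation threshold k are invented for illustration.

```python
import statistics

def extract_stop_points(densities, k=1.0):
    """Flag trajectory points whose Point Density (PD) is not explained by a
    single Gaussian fitted to the 1D density sequence.

    densities: the 1D approximate representation (one PD value per point).
    Returns the indices treated as Stop Points. The paper's PDGM is fitted
    iteratively until convergence; here we fit once for brevity.
    """
    mu = statistics.fmean(densities)
    sigma = statistics.pstdev(densities)
    # Points far above the Gaussian's bulk lie in dense regions where the
    # moving object lingered, i.e. candidate Stop Points.
    return [i for i, d in enumerate(densities) if d - mu > k * sigma]

# Move points have low, similar densities; the spike marks a stop.
pd_series = [1.0, 1.2, 0.9, 1.1, 6.5, 7.0, 6.8, 1.0, 1.1]
print(extract_stop_points(pd_series))  # → [4, 5, 6]
```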
Trajectory Clustering, Stop Points Extraction, Move Points, Feature Analysis, Point Density Gaussian Model.
Jingci Li and Guangquan Lu, Department of Software Engineering, Guangxi Normal University, Guilin, China
Recently, learning representations of graphs has achieved great success. While there are many graph neural networks for single tasks on graph-structured data, one area that researchers have not explored deeply is multi-task learning on graph neural networks. The graph autoencoder and graph variational autoencoder are powerful frameworks for link prediction or graph generation, and the learned representation is mostly passed to the downstream task of node clustering instead of being trained simultaneously. Motivated by these observations, we propose a novel framework called the multi-task adversarially regularized graph variational autoencoder (MTADGVAE), based on the graph variational autoencoder and an adversarial mechanism, for semi-supervised node classification and unsupervised link prediction. We validate our framework on the Cora, Pubmed and Citeseer datasets, and the experimental results demonstrate competitive performance compared with other advanced frameworks. We also develop three variants of MTADGVAE to obtain a more robust embedding.
Graph Variational Autoencoder, Adversarial Model, Multi-task Learning.
Peilin Li1 and Yu Sun2, 1Coding Minds, 920 Roosevelt Suite 200, Irvine, 92620, 2California State Polytechnic University, Pomona, CA, 91768
As society evolves, everyone's workload is increasing, and being able to manage one's time well and efficiently is vital. What tools can we utilize to help us develop better time management skills while working towards a goal? This paper develops an application to help people achieve better time management and support them in achieving their dreams. We applied our application to some high school students and conducted a qualitative evaluation of the approach. The results show that this software addresses the problem of defective time management. It helps the user build up good time management by setting up a timer for every event: users see whether they spent too much time on a task, hence improving their time management. In addition, it also discourages procrastination; since there is a timer, the user feels pressure whenever they start a task and consequently tries to complete it as soon as possible in order to relax. Last but not least, our application helps users pursue their dreams: as they cultivate good time management, they use their time more effectively.
Machine Learning, Web Scraping, Python.
Angelina Tzacheva1, Akshaya Easwaran2, 1Department of Computer Science, University of North Carolina at Charlotte, Charlotte, North Carolina – 28223, USA, 2Department of Information Technology, University of North Carolina at Charlotte, Charlotte, North Carolina - 28223, USA
Every day, several tons of data are generated by social media and education sectors. Mining this data can provide many meaningful insights on how to improve the user experience in social media and the student learning experience. Action Rule Mining is a method that can extract such actionable patterns from various datasets. Action Rules provide actionable suggestions on how to change the state of an object from an existing state to a desired state for the benefit of the user. There are two major frameworks in the Action Rule mining literature: the Rule-Based method, where the extraction of Action Rules depends on the pre-processing step of classification rule discovery, and the Object-Based method, which extracts the Action Rules directly from the database without the use of classification rules. The Hybrid Action Rule mining approach combines both frameworks and generates a complete set of Action Rules, and it shows significant improvement in computational performance over the Rule-Based and Object-Based approaches. In this work we propose a novel Modified Hybrid Action Rule approach which further improves the computational performance on big datasets.
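To make the terminology concrete, the toy sketch below counts support for one action rule directly from a table of objects, in the spirit of the Object-Based method. The rule, the data, and the min-based support definition are illustrative assumptions, not the paper's Modified Hybrid algorithm.

```python
def action_rule_support(rows, flexible, frm, to, decision, undesired, desired):
    """Support of the action rule (flexible: frm -> to) => (decision: undesired -> desired).

    rows: list of dicts, one per object. A simplified count: objects in the
    undesired state holding value `frm`, versus objects already in the desired
    state holding value `to`; the rule is only as supported as the weaker side.
    """
    left = sum(1 for r in rows if r[flexible] == frm and r[decision] == undesired)
    right = sum(1 for r in rows if r[flexible] == to and r[decision] == desired)
    return min(left, right)

# Invented example: suggest moving customers from a basic to a premium plan
# to change their state from churn to loyal.
customers = [
    {"plan": "basic",   "status": "churn"},
    {"plan": "basic",   "status": "churn"},
    {"plan": "premium", "status": "loyal"},
    {"plan": "premium", "status": "loyal"},
    {"plan": "premium", "status": "churn"},
]
print(action_rule_support(customers, "plan", "basic", "premium",
                          "status", "churn", "loyal"))  # → 2
```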
Actionable Patterns, Action Rules, Emotion Detection, Data Mining, Rule-Based, Object-Based.
Naciye Celebi1, Qingzhong Liu1, Muhammed Karatoprak2, 1Department of Computer Science, Sam Houston State University, Huntsville, TX, USA, 2LLM in US Law, The University of Houston, Houston, TX, USA
Recently, image manipulation has grown rapidly due to the advancement of sophisticated image editing tools. DeepFake denotes the recent surge of fake imagery and videos generated using neural networks: DeepFake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Generative Adversarial Networks (GANs) have been extensively used for creating realistic images without accessing the original images. It has therefore become essential to detect fake videos to avoid the spread of false information. This paper presents a survey of methods used to detect DeepFakes, as well as the datasets available for DeepFake detection in the literature to date. We present extensive discussions and research trends related to DeepFake technologies.
DeepFake, Digital Forensics, Law.
Sarvenaz Ghafourian and Ramin Sharifi, Department of Electrical and Computer Engineering, University of Victoria, Victoria, Canada
Computer vision has attracted wide usage in many areas in recent years. One area of computer vision that has been studied is facial emotion recognition, which plays a crucial role in interpersonal communication. This paper tackles the problem of intra-class variance in the face images of emotion recognition datasets. We test the system on an augmented dataset including CK+, EMOTIC, and KDEF dataset samples.
Emotion Recognition, Residual Network, VGG.
Rubén Adrián Cuba Lajo and Manuel Eduardo Loaiza Fernández
Particle packings are used to simulate granular matter, which has various uses in industry. Their most important characteristics are density and construction time; density refers to the percentage of the object's space filled with particles and is also known as compaction or solid fraction. A particle packing seeks to be as dense as possible, work on any object, and have a low build time. Current proposals have significantly reduced the construction time of a packing and have also managed to increase its density; however, they have certain restrictions, such as working on a single type of object and being strongly affected by the characteristics of the object. The objective of this work is to present an improvement of a parallel sphere packing for arbitrary domains. The build time of the packing to be improved was directly affected by the number of triangles in the mesh of the object. This enhancement focuses on creating a parallel data structure to reduce build time. The proposed method reduces execution time for meshes with a high number of triangles, but the data structure takes up a significant amount of memory. However, to obtain high densities, that is, densities between 60% and 70%, the sphere packing construction does not overwhelm the memory.
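The kind of auxiliary structure described above can be sketched as a uniform grid that buckets mesh triangles by their bounding boxes, so that placing a sphere only consults nearby triangles instead of the whole mesh. The cell size and grid layout here are assumptions; the paper's parallel structure may differ in detail.

```python
from collections import defaultdict

def build_triangle_grid(triangles, cell):
    """Bucket each triangle into every grid cell its axis-aligned bounding
    box touches. triangles: list of three (x, y, z) vertices.
    Returns a dict mapping cell index (i, j, k) -> list of triangle ids,
    so intersection tests during sphere placement stay local.
    """
    grid = defaultdict(list)
    for tid, tri in enumerate(triangles):
        xs, ys, zs = zip(*tri)
        lo = (int(min(xs) // cell), int(min(ys) // cell), int(min(zs) // cell))
        hi = (int(max(xs) // cell), int(max(ys) // cell), int(max(zs) // cell))
        for i in range(lo[0], hi[0] + 1):
            for j in range(lo[1], hi[1] + 1):
                for k in range(lo[2], hi[2] + 1):
                    grid[(i, j, k)].append(tid)
    return grid

# A triangle crossing x = 1 lands in four cells of a unit grid.
grid = build_triangle_grid([((0, 0, 0), (1.5, 0, 0), (0, 1, 0))], cell=1.0)
print(sorted(grid))  # → [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0)]
```

Each grid cell is independent, which is what makes the structure amenable to the parallel construction the abstract describes.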
Sphere Packing, Geometric Algorithm, Parallel.
Ali Ahmed, Information Technology department, Faculty of Computers and Information, Menoufia University, Egypt
Recently, the process of fish species classification has become one of the most challenging problems addressed by researchers. In this research, an efficient scheme to classify fish images based on robust features extracted from the shape signature is proposed. First, the boundary points of fish images are fitted to a continuous contour using a radial basis function neural network (RBFNN) to calculate the centroid of the fish image. After that, robust features are extracted from the shape signature. These features represent fish shape well because they can distinguish the characteristics of each class and are relatively robust to scale and rotation changes. Then, two of the most commonly used classification techniques, RBFNN and the support vector machine (SVM), were evaluated and compared on the fish image shape features. Our system has been applied to a well-known fish dataset acquired from live video, grouped into 23 clusters where each cluster represents a specific species. Overall accuracies of 90.41% and 98.04% were obtained using SVM and RBFNN, respectively.
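A minimal sketch of a centroid-distance shape signature, one common way to realize the scale-tolerant boundary features described above. The RBFNN contour fitting and the classifiers themselves are not reproduced, and the uniform sampling scheme is an assumption.

```python
import math

def shape_signature(boundary, samples=8):
    """Centroid-distance shape signature: the distance from the shape
    centroid to sampled boundary points, normalized by its maximum so the
    signature is invariant to scale. boundary: ordered (x, y) contour points.
    """
    cx = sum(x for x, _ in boundary) / len(boundary)
    cy = sum(y for _, y in boundary) / len(boundary)
    dists = [math.hypot(x - cx, y - cy) for x, y in boundary]
    step = len(dists) / samples            # uniform resampling of the contour
    sig = [dists[int(i * step)] for i in range(samples)]
    m = max(sig)
    return [d / m for d in sig]

# A square and the same square scaled by 2 yield identical signatures.
sq  = [(0, 0), (1, 0), (1, 1), (0, 1)]
sq2 = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(shape_signature(sq, 4) == shape_signature(sq2, 4))  # → True
```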
Shape signature, Radial Basis Function, Support vector machine, Fish classification.
Loc Truong and Zilong Ye, California State University, Los Angeles, USA
As the techniques of autonomous driving and connected vehicles continue to advance, it becomes promising to offer a rich set of information (e.g., news) and entertainment (e.g., video) services for drivers and passengers on the road. In this work, we propose an efficient and secure on-road video streaming framework over fog computing. The video content is divided into a number of small-sized clips and then pushed from the cloud to stationary fog nodes in advance. Upon the client's request, the stationary fog node authenticates the user's identity and pushes the video clips to a number of mobile fog nodes along the route of the vehicle, so that the video can be delivered smoothly to drivers and passengers on the road. We prototype this framework and run experiments to verify its performance. The experimental results show that fog-computing-based on-road video streaming can achieve 28% lower downloading time (on average) compared to traditional cloud-based video streaming for on-road vehicles.
Fog computing, mobile fog, video streaming, on-road, vehicle.
Ning Luo and Yue Xiong, Intel Asia-Pacific Research & Development Ltd
Modern software has growing complexity, which puts higher quality requirements on continuous software development. Software quality goes beyond pass rate and is reflected in more perspectives across the whole development cycle. Engineering teams need to embrace these challenges, and we believe blockchain (BC) technologies can help the transformation. In this paper, citing one ultra-large-scale platform software, the Intel Media Driver, as an example, we demonstrate our experience leveraging a novel blockchain-based software quality cryptocurrency system, QCoin, to drive the mindset change towards software quality beyond pass rate. We expect the experiences shared here to help more practitioners apply a similar methodology in their continuous software development.
Software Quality, Project Execution Predictability, Blockchain, Cryptocurrency, Proof of Stake, Smart Contract, Continuous Software Development.
Ayeshmantha Wijegunathileke and Achala Aponso, Department of Computing, Informatics Institute of Technology, Sri Lanka
Machine learning, a subtype of AI, enables computers to mimic human behaviour without explicit programming. Machine learning models aren't used very often in diagnostic imaging because there isn't enough knowledge and there aren't enough resources to do so. Hence, this study aims to apply automated machine learning to the diagnosis of medical images to make machine learning more accessible to non-experts. In this study, a dataset containing 2313 images each of COVID-19, pneumonia and normal chest X-rays was selected and divided into testing, training, and validation sets. The AutoGluon library was used to train and produce a model that classifies an input image and infers the probable diagnosis from the diseases it was trained upon. This study shows that applying hyperparameter optimization and neural architecture search can produce high-accuracy models for medical image diagnosis.
Automated Machine Learning, Hyperparameter Tuning, Neural Architecture Search, Medical Imaging.
Brian Thoms and Jason Isaacs, CSU Channel Islands, USA
Classifier algorithms are a subfield of data mining and play an integral role in finding patterns and relationships within large datasets. In recent years, fake news detection has become a popular area of data mining for several important reasons, including its negative impact on decision-making and its virality within social networks. In the past, traditional fake news detection has relied primarily on information context, while modern approaches rely on auxiliary information to classify content. Modelling with machine learning and natural language processing can aid in distinguishing between fake and real news. In this research, we mine data from Reddit, the popular online discussion forum and social news aggregator, and measure machine learning classifiers in order to evaluate each algorithm’s accuracy in detecting fake news using only a minimal subset of data.
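As a concrete stand-in for the classifiers being compared, the sketch below trains a multinomial Naive Bayes model on a handful of invented post titles. The data, vocabulary, and choice of Naive Bayes are illustrative assumptions; the paper's Reddit features and algorithm set are not reproduced.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Multinomial Naive Bayes over bag-of-words text.
    docs: list of (text, label) pairs. Returns the fitted counts."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    for text, label in docs:
        label_counts[label] += 1
        word_counts[label].update(text.lower().split())
    vocab = {w for c in word_counts.values() for w in c}
    return word_counts, label_counts, vocab

def predict_nb(model, text):
    """Pick the label maximizing log prior + add-one-smoothed log likelihood."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label in label_counts:
        lp = math.log(label_counts[label] / total)
        n = sum(word_counts[label].values())
        for w in text.lower().split():
            lp += math.log((word_counts[label][w] + 1) / (n + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Toy titles (invented) standing in for Reddit post data.
model = train_nb([
    ("shocking miracle cure they hide", "fake"),
    ("you wont believe this trick", "fake"),
    ("senate passes budget bill", "real"),
    ("study reports modest gains", "real"),
])
print(predict_nb(model, "miracle trick they hide"))  # → fake
```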
Machine Learning, Natural Language Processing, Reddit Social Network.
Eldane Vieira, Rita Maria Silva Julia and Elaine Ribeiro Faria, Federal University of Uberlândia, Brazil
A great variety of real-world problems can be satisfactorily solved by automatic agents that use adaptive learning techniques conceived to deal with data stream scenarios. The success of such agents depends on their ability to detect changes that occur in the problem environment and to use such information to conveniently adapt their decision-making modules. Several change detection methods have been proposed, with emphasis on the M-DBScan algorithm, which is the basis of this proposal. However, none of these methods is able to capture the meaning of the identified changes. Thus, the main contribution of this work is to propose an extended version of M-DBScan, called Semantic-MDBScan, with such ability. The proposed approach was validated through artificial data sets representing distinct scenarios. The experiments show that Semantic-MDBScan indeed achieves the intended goal.
Data Stream, Behavior change detection, Semantic assignment.
Qiantai Chen1 and Yu Sun2, 1Department of Computer Science, University of California, Irvine, CA 92697, 2Computer Science Department, California State Polytechnic University, Pomona, CA 91768
Online media has become mainstream in current society. With the rapid growth of video data, how to acquire desired information from given media is an urgent problem nowadays. The focus of this paper is to analyse efficient algorithms to address the issue of dynamic, complex movie classification. This paper briefly demonstrates three major methods to acquire data and information from movies: image classification, object detection, and audio classification. The purpose is to allow the computer to analyse the content inside each movie and understand video content. Movie classification has high research and application value. By implementing the described methods, this paper aims to find the most efficient methods to classify movies. It is foreseeable that certain methods may have advantages over others when the clips are distinctive in some way, for example when the audio has several significant peaks or the video has more content than others. This research aims to find a middle ground between accuracy and efficiency to optimize the outcome.
Convolutional Neural Network, Image Classification, Object Detection, Audio Classification, Movie Classifier.
Ekrem Duman, Department of Industrial Engineering, Ozyegin University, Istanbul, Turkey
In a bank, the main function of the internal control department is to check banking operations for compliance with regulations and bank business policies. To do this, internal controllers pick a number of operations, selected by some rule or randomly, and investigate them against some predetermined check points. If they find any discrepancies, they inform the corresponding department (usually bank branches) and ask for a correction or explanation. In this study, we aim to help the internal controllers determine the operations most likely to be problematic, so that investigating them yields a higher number of incompatibilities. We picked the credit issuing department, which keeps the controllers busiest. We show that the developed predictive models can improve their work efficiency considerably.
Predictive modeling, banking, internal control.
Júlia Colleoni Couto, Olimar Teixeira Borges, and Duncan Dubugras Ruiz, School of Technology, PUCRS University, Brazil
When we work in a data lake, data integration is not easy, mainly because the data is usually stored in raw format. Manually performing data integration is a time-consuming task that requires the supervision of a specialist, who can make mistakes or fail to see the optimal point for data integration among two or more datasets. This paper presents a model to perform heterogeneous in-memory data integration in a Hadoop-based data lake based on a top-k set similarity approach. Our main contribution is the process of ingesting, storing, processing, integrating, and visualizing the data integration points. The algorithm for data integration is based on the Overlap coefficient, since it presented better results when compared with the set similarity metrics Jaccard, Sørensen-Dice, and the Tversky index. We tested our model by applying it to eight bioinformatics-domain datasets. Our model presents better results when compared to the analysis of a specialist, and we expect it can be reused for other dataset domains.
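The Overlap coefficient and the top-k ranking step can be sketched as follows. The column names and values are invented toy data; the in-memory Hadoop ingestion pipeline is not shown.

```python
def overlap_coefficient(a, b):
    """Overlap (Szymkiewicz-Simpson) coefficient between two value sets:
    |A ∩ B| / min(|A|, |B|), in [0, 1]."""
    a, b = set(a), set(b)
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def top_k_integration_points(cols_a, cols_b, k=2):
    """Score every column pair across two datasets by overlap of their value
    sets and keep the top-k pairs as candidate integration points."""
    scores = [(overlap_coefficient(va, vb), na, nb)
              for na, va in cols_a.items() for nb, vb in cols_b.items()]
    return sorted(scores, reverse=True)[:k]

# Two toy bioinformatics-flavoured datasets (column -> distinct values).
cols_a = {"gene_id": {"g1", "g2", "g3"}, "tissue": {"liver", "lung"}}
cols_b = {"gene": {"g2", "g3", "g4"}, "organ": {"lung", "heart", "brain"}}
for score, na, nb in top_k_integration_points(cols_a, cols_b):
    print(f"{na} ~ {nb}: {score:.2f}")
```

Note that because the Overlap coefficient divides by the smaller set, a column whose values are a subset of another's scores 1.0, which is exactly the behaviour wanted when one dataset's key column is contained in another's.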
Data integration, Data lake, Apache Hadoop, Bioinformatics.
Fernando Buarque and Alvaro Farias Pinheiro, University of Pernambuco, Brazil
This work consists in the application of supervised machine learning techniques to identify which types of active debts are appropriate for the collection method called protest, one of the means of collection used by the Attorney General's Office of the State of Pernambuco. The following techniques were applied: Neural Network (NN), Logistic Regression (LR) and Support Vector Machine (SVM). The NN model obtained the most satisfactory results among the classification techniques, achieving the best values in the following metrics: Accuracy (AC), F-Measure (F1), Precision (PR) and Recall (RC), with indexes above 97%. The results showed that building an Artificial Intelligence/Machine Learning model to choose which debts are likely to succeed in the collection process via protest can bring benefits to the government of Pernambuco, increasing its efficiency and effectiveness.
Data Mining, Artificial Intelligence, Machine Learning & Public Debt Collection.
Liu Pei1, Chen Zipeng1, Zhao Longgang1, Wang Nannan2, Xia Fan2, Sun Peixia1, Chang Qian1, 1Research Institute of China Telecom Corporation Limited, Castle University, Beijing, China, 2Beijing University of Posts and Telecommunications, Beijing, China
With the expansion and development of the telecom business, maintenance in telecom is facing more pressure and challenges and urgently needs digital transformation. Work orders are an important part of maintenance in telecom, mainly involving two types of scenarios: work order dispatching and work order interaction. The work order dispatching system faces the challenge of accurately identifying work orders and dispatching them to the corresponding departments in time, while the work order interaction system needs to respond quickly to feedback from provincial companies and on-site operators to avoid delayed processing. Recently, artificial intelligence algorithms have shown superiority in solving the problems of these two scenarios. In this paper, we propose solutions for both. For work order dispatching, on the one hand, we propose a text classification algorithm based on a work order pre-training model to identify and dispatch work orders; on the other hand, we propose a Bilateral Branching Network for text recognition (T-BBN) to build a work order dispatching system for the case of data with a long-tail distribution. For the work order interaction scenario, we perform model fusion based on four deep learning models, use contextual semantic modeling to achieve intelligent interaction with provincial companies or on-site feedback content, and innovatively incorporate shallow features according to business reality. The methods proposed in this paper all prove their effectiveness through experiments and can better accomplish the work order tasks in the actual telecom business.
Intelligent Response, Maintenance in Telecom, Work Order Dispatching, Work Order Interaction, Work Order Pre-training Model, T-BBN.
Shikha Verma1, Arun Kumar Gautam2, Santushti Gandhi3 and Aman Goyal4, 1,2Assistant Professor, Department of Computer Science, Ram Lal Anand College, University of Delhi, India, 3,4Student, Department of Computer Science, Ram Lal Anand College, University of Delhi, India
This paper proposes sentence-level sentiment analysis to classify whether a movie review is positive or negative using the supervised machine learning algorithm Support Vector Machine (SVM). The experiment is performed on the IMDB dataset, which showed an improvement in accuracy when using the SVM approach compared to other supervised machine learning algorithms. Training with SVM achieves an accuracy of 98.89%, an improvement over the other approaches implemented so far.
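A pure-Python sketch of a linear SVM trained with Pegasos-style subgradient updates on one-hot bag-of-words vectors, to illustrate the classifier family used. The actual pipeline uses a full feature extractor and a library SVM on the IMDB data; the vocabulary and reviews here are toys.

```python
def train_linear_svm(X, y, lam=0.01, epochs=50):
    """Linear SVM via Pegasos-style subgradient descent.
    X: list of feature vectors, y: labels in {-1, +1}. Returns the weights."""
    w = [0.0] * len(X[0])
    t = 0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            t += 1
            eta = 1.0 / (lam * t)                       # decaying step size
            margin = yi * sum(wj * xj for wj, xj in zip(w, xi))
            w = [(1 - eta * lam) * wj for wj in w]      # regularization shrink
            if margin < 1:                              # hinge-loss violation
                w = [wj + eta * yi * xj for wj, xj in zip(w, xi)]
    return w

def predict(w, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1

# Toy vocabulary: [good, great, bad, awful]; one-hot review vectors.
X = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
y = [1, 1, -1, -1]
w = train_linear_svm(X, y)
print(predict(w, [1, 1, 0, 0]))  # → 1 (positive review)
```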
Text Classification, Sentiment Analysis, Text Processing, Supervised Learning, Supervised Machine Learning, SVM, Support Vector Machine, Datasets, Confusion Matrix.
Domenic Rosati, scite.ai, Brooklyn, New York, USA
Recent work has demonstrated that machine learning models allow us to compare languages by showing how hard each language might be to learn under specific tasks. Following this line of investigation, we investigate what makes a language “hard to pronounce” by modeling the task of grapheme-to-phoneme (g2p) transliteration. By training a character-level transformer model on this task across 22 languages and measuring the model’s proficiency against its grapheme and phoneme inventories, we show that certain characteristics emerge that separate easier and harder languages with respect to learning to pronounce, namely that the complexity of a language’s pronunciation relative to its orthography depends on how expressive or simple its grapheme-to-phoneme mapping is. Further discussion illustrates how future studies should consider relative data sparsity per language in order to design fairer cross-lingual comparison tasks.
Phonology, Orthography, Linguistic Complexity, Grapheme-to-Phoneme, Transliteration.
Alexandra Uma and Dmitry Sityaev, Connex One, 8 Tib Lane, Manchester, M2 4JB, United Kingdom
This paper provides the results of an evaluation of some techniques used for text summarization, with the goal of producing call summaries for contact centre solutions. We specifically focus on extractive summarization methods, as they do not require any labelled data and are fairly quick and easy to implement for production use. TopicSum and Lead-N were found to outperform other summarization methods, whilst BERTSum received quite low scores in both subjective and objective evaluations. The results demonstrate that even simple heuristics-based methods like Lead-N can produce meaningful and useful summaries of call centre dialogues.
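Lead-N is simple enough to state in a few lines: the summary is just the first n sentences of the transcript. The regex-based sentence splitting below is a naive assumption; production systems would use turn boundaries or a proper segmenter.

```python
import re

def lead_n_summary(transcript, n=3):
    """Lead-N extractive baseline: return the first n sentences verbatim.
    Splits on whitespace that follows sentence-final punctuation."""
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", transcript.strip())
                 if s.strip()]
    return " ".join(sentences[:n])

call = ("Hello, thanks for calling. I need to update my address. "
        "Sure, I can help with that. Anything else today? No, that is all.")
print(lead_n_summary(call, 2))
# → Hello, thanks for calling. I need to update my address.
```

Its strength for call-centre dialogues, as the evaluation suggests, is that the opening turns usually state the reason for the call.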
Information Retrieval, Text Summarization, Extractive Summarization, Call Centre Dialogues.
Fadya Abbas, Department of Computer Engineering, Nahrain University, Canada
Dealing with vast amounts of textual data requires an efficient system. However, the highly ambiguous and complex nature of much prosodic phrasing, together with the perennially limited datasets suitable for system training, poses big challenges for training NLP systems. The proposed conceptual framework aims to provide an understanding of, and familiarity with, the elements of modern deep learning networks for NLP use. In this design, the encoder uses Bidirectional Long Short-Term Memory deep network layers to encode the text sequences into more context-sensitive representations. In addition, the attention mechanism is used to generate a context vector computed from the alignment scores at the various word positions, so the model can focus on a small subset of words. The attention mechanism thus improved the model's data efficiency, and the model's performance is validated using example data sets that show promise for a real-life application.
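The alignment-score/context-vector computation described above can be sketched numerically as dot-product attention. The vectors are toy values and the full BiLSTM encoder is omitted; only the mechanism itself is shown.

```python
import math

def attention_context(query, states):
    """Dot-product attention: alignment scores -> softmax weights ->
    weighted-sum context vector over the encoder states.
    query: decoder-side vector; states: list of encoder state vectors."""
    scores = [sum(q * h for q, h in zip(query, s)) for s in states]
    m = max(scores)                        # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scores]
    z = sum(exps)
    weights = [e / z for e in exps]        # attention distribution, sums to 1
    context = [sum(w * s[d] for w, s in zip(weights, states))
               for d in range(len(states[0]))]
    return weights, context

# States 0 and 2 align with the query, so they receive the higher weights.
weights, context = attention_context([1.0, 0.0],
                                     [[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
print([round(w, 3) for w in weights])
```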
NLP, Deep Learning, LSTM, Attention Mechanism, Data Efficiency.
Hessa Abdulrahman Alawwad, College of Computer Science and Information, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh, Saudi Arabia
This paper aims to fill a gap in the coverage of major Islamic topics by developing an ontology for the Book of Purification in Islam. Many trusted books start with the Book of Purification, as it is the key to prayer (the second pillar after the Shahadah, the profession of faith) and is required in Islamic duties like performing Umrah and Hajj. The strategy for developing the ontology included six steps: (1) domain identification, (2) knowledge acquisition, (3) conceptualization, (4) classification, (5) integration and implementation, and (6) ontology generation. Examples of the built tables and classifications are included in this paper. Since the technical implementation of the proposed ontology is not within this paper's objectives, focus is given to the design and analysis phases, and we provide an initial implementation to illustrate the steps of our strategy. The focus of this paper is to ensure that this ontology, or knowledge representation, of the Book of Purification in Islam satisfies reusability, where the main attributes, concepts, and their relationships are defined and encoded. This formal encoding will be available for sharing and reuse.
Haider Khalid and Vincent Wade, School of Computer Science and Statistics, Trinity College Dublin, the University of Dublin, Ireland
Dialogue topic modelling aims to divide unannotated textual dialogues into segments associated with particular topics. However, existing popular unsupervised topic modelling techniques are widely used on textual data such as documents, blogs and tweets, and rarely on dialogue corpora. These techniques use similarity features between sentences and assume that consecutive sentences in the same section/paragraph are more similar than sentences in different sections to determine topical coherence. In this paper, the proposed experimental approach is based not only on similarity features among dialogue utterances but also on a BiLSTM sequence processing model that adopts bags of topics (BoT) to formulate dialogue topic segmentation. Furthermore, it uses contextual information as an additional feature to divide dialogues into segments for a particular topic. First, we use the existing topics to perform noisy labelling for each dialogue utterance. Then, we input the noisily labelled data to pre-train the BiLSTM network, which refines the noisy labels and predicts topics for dialogue utterances. Finally, using the contextual information, the refined labels for each dialogue utterance are used to segment the dialogues. The experiment is performed on the Switchboard dialogue corpus, which is publicly available. To test the proposed model, we use manually annotated test data and perform a comparative analysis against existing techniques. Mean absolute error, window difference, and F-measure are used as evaluation metrics because of their widespread use in comparable systems. The experimental results show that the proposed approach substantially improves all three metrics over existing methods.
Dialogue topic modeling, Bag of topics, BiLSTM model, Topic segmentation, Dialogue system.
Youssef Fakir and Rachid Elayachi, Information Processing and Decision Support Laboratory, Department of Computer Science, Faculty of Science and Technology, PO Box 523, Beni Mellal, Morocco
The extraction of association rules is generally done in two steps: the extraction of frequent itemsets and then the generation of association rules from these itemsets. When the amount of data is high, traditional algorithms such as APRIORI consume much time. This can be addressed with parallel algorithms based on Spark. This paper's aim is a comparative study of algorithms based on the Spark framework. The algorithms presented in this paper are DFPS, RAPRIORI, PFP and YAFIM. Experimental results showed that the PFP algorithm is more efficient than the other algorithms.
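For reference, the sequential APRIORI baseline that these Spark algorithms parallelize can be sketched as level-wise candidate generation with subset pruning (transactions and support threshold here are toy values):

```python
from itertools import combinations

def apriori_frequent(transactions, min_support):
    """Level-wise APRIORI frequent-itemset mining.
    transactions: list of item sets. Returns {frozenset(itemset): count}
    for every itemset occurring in at least min_support transactions."""
    def count(cands):
        c = {fs: 0 for fs in cands}
        for t in transactions:
            for fs in cands:
                if fs <= t:
                    c[fs] += 1
        return {fs: n for fs, n in c.items() if n >= min_support}

    items = {i for t in transactions for i in t}
    frequent = count({frozenset([i]) for i in items})   # frequent 1-itemsets
    result = dict(frequent)
    k = 2
    while frequent:
        prev = set(frequent)
        cands = {a | b for a in prev for b in prev if len(a | b) == k}
        # APRIORI pruning: drop candidates with any infrequent (k-1)-subset.
        cands = {c for c in cands
                 if all(frozenset(s) in prev for s in combinations(c, k - 1))}
        frequent = count(cands)
        result.update(frequent)
        k += 1
    return result

baskets = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"}]
print(apriori_frequent(baskets, 3))  # all singletons and pairs, not {a,b,c}
```

The repeated candidate-counting passes over the whole database are exactly the cost that Spark-based variants such as YAFIM and PFP distribute across the cluster.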
Spark, Data mining, Frequent itemset, Big Data.
Shahzad Ashraf, College of Internet of Things Engineering (IoT), Hohai University, China
Over time, an exorbitant quantity of data is being generated, which requires a shrewd technique for handling such a big database to smooth the data storage and dissemination process. Storing and exploiting such big data quantities requires sufficiently capable systems with a proactive mechanism to meet the technological challenges. The traditional Distributed File System (DFS) falls short when handling dynamic variations and requires undefined settling time. Therefore, to address such huge data handling challenges, a proactive grid-based data management approach is proposed, which arranges the huge data into various tiny chunks called grids and places them according to the currently available slots. Data durability and computation speed have been aligned by designing data dissemination and data eligibility replacement algorithms. This approach substantially enhances the durability of data access and the writing speed. The performance has been tested on numerous grid datasets: chunks were analyzed through various iterations by fixing the initial chunk statistics, making a predefined chunk suggestion, and then relocating the chunks after substantial iterations. We found that chunks reach an optimal node from the first replacement iteration, which is more than 21% of working clusters compared to the traditional approach.
Data mining, exorbitant, nodes, dissemination process, data chunks, slots, grid, data storage.
Sanjana Narvekar, Mayur Shirodkar, Purva Vaingankar, Tanvi Raut, K. M. Chaman Kumar and Shailendra Aswale, Computer Engineering, SRIEIT, Goa University, India
As the world progresses at an extraordinary pace, we are also seeing a growing number of malignant cancer cases, one of which is lung cancer, which has destroyed the lives of many individuals. Lung cancer is detected using Computed Tomography (CT) scans, X-rays, Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI). MRI and PET scans are costly compared with CT and X-ray, so CT images are preferred over the other imaging procedures, and most specialists opt for CT scans or X-rays. To prevent inconsistency, CT scans and X-ray images can be analysed in greater depth using image processing techniques. CT scans and X-rays are the most commonly used and affordable radiological scans that can be performed to detect lung cancer. This survey paper covers the various technologies used in detecting lung cancer with respect to many parameters, such as classifiers, enhancement procedures and feature extraction methods, with the aim of advancing this technology to the highest possible accuracy and making CAD a successful instrument in the hands of specialists.
Lung Cancer, Computed Tomography (CT), X-ray, Image Processing, Image Enhancement, Feature Extraction, 3D Imaging, Small Cell Lung Cancer, Non-small Cell Lung Cancer.
Leon He1, Ang Li2, 1La Canada High School, 4463 Oak Grove Drive, Irvine, CA 92604, 2California State University, Long Beach, 1250 Bellflower Blvd, Long Beach, CA 90840
Sleep is a crucial part of a person's daily routine. However, oversleeping is often a hindrance to many people's daily lives. This paper develops an application to prevent people from oversleeping or falling back asleep after snoozing the alarm. We had fellow students use our application and conducted a qualitative evaluation of the approach. The results show that the application significantly improves the chances of waking up.
Machine Learning, Recommendation System, Data Mining.
Michael Li1 and Yu Sun2, 1Northwood High School, 4515 Portola Parkway, Irvine, CA 92620, 2California State Polytechnic University, Pomona, CA 91768
In recent times with the pandemic, many people have been finding exercise as an outlet. However, this situation has made it difficult for people to connect with one another and share their progress with friends and family. This paper designs an application to utilize big data, a social media network, and exercise tracking. The program aims to help people connect with others to support one another in their fitness journey. Through various experiments, we demonstrated that the application was effective in connecting users with each other and overall improving their fitness experience. Additionally, people of all experience levels in fitness were generally satisfied with the performance of FitConnect, with those of higher experience being less satisfied than those with lesser experience. This application will facilitate getting into fitness through positive means for any person who wants to pursue a healthy lifestyle, whether in the walls of their house, a swimming pool, or a gym.
Big Data, Social Community, Exercise Tracking.
Ning Luo and Linlin Zhang, Visual Computing Group, Intel Asia-Pacific Research & Development Ltd, Shanghai, China
Unit level test (ULT) has been widely recognized as an important approach to improving software quality, as it can expose bugs earlier in the development phase. However, manual unit level test development is often tedious and insufficient. It is also hard for developers to precisely identify, by themselves, the most error-prone code blocks deserving the best test coverage. In this paper, we present the automatic unit level test framework we used for Intel media driver development. It can help us identify the most critical code blocks, provide test coverage recommendations, and automatically generate >80% of the ULT code (~400K lines of test code) as well as ~35% of the test cases (~7K test cases) for the Intel media driver. It helps us greatly shrink the average ULT development effort from ~24 man-hours to ~3 man-hours per 1000 lines of driver source code.
Unit level test, error prone logic, test coverage inference, automatic ULT generation, fuzzing, condition/decision coverage.
Bamrung Tausiesakul, Department of Electrical Engineering, Faculty of Engineering Srinakharinwirot University, 63 Mu 7, Rangsit-Nakhonnayok Road, Canal 16, Ongkharak, Nakhon Nayok 26120, Thailand
Several methods for signal acquisition in compressed sensing have been proposed in the past. The iterative hard thresholding (IHT) algorithm and its variants can be considered gradient-descent-based methods. Unfortunately, when the objective function is highly nonlinear, steepest descent typically suffers from many local minima. One way to steer the nonlinear search close to the global solution is to manipulate the search step size. In this work, a numerical search is used to find the step size that minimizes the signal recovery error for the normalized IHT algorithm. The proposed step size is compared to a randomly chosen fixed one, as in former works. Numerical examples illustrate that the optimal parameters that form a good step size can provide a lower normalized root mean square error of the acquired signal than the arbitrarily chosen step size. The performance improvement is pronounced for a large number of nonzero elements in the sparse signal.
Compressive sensing, iterative hard thresholding, sparsity pattern.
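The IHT iteration discussed above can be sketched as follows. The orthonormal sensing matrix and the fixed step size mu are simplifying assumptions chosen so the toy example converges, whereas the paper searches numerically for the step size that minimizes recovery error:

```python
import numpy as np

def hard_threshold(x, s):
    """Keep the s largest-magnitude entries of x, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-s:]
    out[idx] = x[idx]
    return out

def iht(A, y, s, mu=1.0, iters=50):
    """Iterative hard thresholding: x <- H_s(x + mu * A^T (y - A x))."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + mu * A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(0)
A, _ = np.linalg.qr(rng.standard_normal((32, 32)))  # orthonormal sensing matrix
x_true = np.zeros(32)
x_true[[3, 10, 25]] = [1.5, -2.0, 0.7]              # 3-sparse signal
x_hat = iht(A, A @ x_true, s=3)
```

With an ill-chosen mu the same loop stalls in local minima, which is exactly the behavior that motivates the step-size search in the abstract.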
Vilca Vargas Jose R, Quio Añauro Paúl A and Loaiza Fernández Manuel E, Universidad Católica San Pablo, Arequipa, Perú
Optical technology is widely used in tracking systems because of its high availability, low cost, and because the tracked object is not encumbered by cables or other components. That is why the accuracy of optical devices, tied to a precise camera calibration, is crucial. In the proposed method, we seek to perform camera calibration using a planar pattern for the initial calibration. Later, we use projective invariant properties to obtain invariant projective patterns defined inside the calibration pattern in the data acquisition stage. After that, we use a hybrid model of photogrammetric calibration and auto-calibration with the previously obtained data. Finally, we optimize the camera parameters with respect to the processed information.
Camera calibration, Pattern recognition, Optimization & Frontoparallel projection.
Sukkrit Sharma, Bidisha Mukherjea and Dr. C. Malathy, Department of Computer Science and Engineering, SRM Institute of Science and Technology, Kattankulathur, India
Driving based on sight is difficult to implement because of the lack of historical data. An autonomous model needs to learn from its environment in order to know how to act. We simplify this problem by first training a supervised agent that learns from ground-level knowledge; it is also known as the expert, as it has a bird's-eye view of the world. The information learnt by the expert agent is provided to the semi-supervised agent in the second stage, where the expert acts as the supervising teacher model. The semi-supervised agent is a perception-based agent which does not observe the ground truth but makes decisions from vision, and it learns by imitating the supervised agent. All experiments and training were performed in a simulated environment, the CARLA simulator. The final testing was done on the NoCrash benchmark of the CARLA simulator and achieves substantially good results.
Imitation Learning, Semi-Supervised Agent, CARLA simulator, Self-Driving Car.
Ishan Kar, Auradine, Inc., Campbell, USA
The convolution of two rational transfer functions is also rational, but a formula for the convolution has never been derived. This project introduces a formula for the convolution of two rational functions in the frequency domain by two new methods. The first method involves a partial fraction expansion of the rational transfer functions, where a complicated rational transfer function is split into simple partial fractions. The problem is reduced to the sum of the convolutions of the partial fractions of the two functions, each of which can be solved by a new formula. Adding the results of the convolutions of the partial fractions, we arrive at the solution. Since calculation of the roots of a high-order polynomial can be very time-consuming, we also demonstrate new methods for performing the convolution without calculating these roots or undergoing partial fraction expansion.
Z-Transform, Laplace transform, Frequency domain, Convolution, Fourier transform.
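The partial-fraction step described above can be sketched numerically: for a proper rational function with simple poles, the residue at each pole p is the limit of (s - p)F(s) as s approaches p (the cover-up rule). The example function below is illustrative, not one from the paper:

```python
def residues(num, poles):
    """Residues of num(s) / prod(s - p) at simple poles p, via the cover-up rule."""
    res = {}
    for p in poles:
        denom = 1.0
        for q in poles:
            if q != p:
                denom *= (p - q)
        res[p] = num(p) / denom
    return res

# F(s) = 1 / ((s + 1)(s + 2)) = 1/(s + 1) - 1/(s + 2)
r = residues(lambda s: 1.0, [-1.0, -2.0])

# Cross-check: evaluate both forms at an arbitrary point s = 3.
s = 3.0
direct = 1.0 / ((s + 1.0) * (s + 2.0))
expanded = sum(c / (s - p) for p, c in r.items())
```

Once each transfer function is split into such first-order terms, the overall convolution reduces to pairwise convolutions of these simple fractions, which is the setting in which the paper's new formula applies.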
Harsh Kapadia1, Yash Richhariya1, Paresh Patel2, Jignesh Patel1, 1Department of Electronics and Instrumentation Engineering, Institute of Technology, Nirma University, Ahmedabad, India, 2Department of Civil Engineering, Institute of Technology, Nirma University, Ahmedabad, India
Structures like bridges, buildings, roads, etc., require inspection at regular intervals to estimate their deterioration and to prevent further accidents or damage to human life and other losses. Vision-based automatic methods to detect or identify defects in such structures have been developed by researchers. The limited field of view of good-quality cameras, the large size of structures, and varying illumination and image acquisition conditions are challenging factors in the success of vision-based defect detection methods. In this paper, a Convolutional Neural Network (CNN) based method is presented to overcome the challenges of variations in the images and to improve the accuracy and robustness of crack or defect detection in concrete structures. For the test case, a laboratory-scale concrete cube of size 150 × 150 × 150 mm was chosen. The CNN model was trained and tested with more than 10,000 crack and non-crack contour images extracted from concrete cube images. The images were acquired with an industrial camera and lens assembly during a compression test of the concrete cube. A novel approach for the detection of cracks and other random noise, dents, etc. present on the concrete cube surface is discussed in detail. The results show an accuracy of 99% for the detection of crack and non-crack contours in grayscale concrete images. The proposed method can be used for crack detection on different surfaces for improved assessment of structures.
Crack detection, Deep learning, Structural health monitoring, Convolutional neural network.
Rachid Sabre1 and Ias Sri Wahyuni2, 1Laboratory Biogéosciences CNRS, University of Burgundy/Agrosup Dijon, France, 2Faculty of Computer Science and Information Technology, Universitas Gunadarma, Indonesia
The aim of multi-focus image fusion is to integrate images with different objects in focus so as to obtain a single image with all objects in focus. In this paper, we present a novel multi-focus image fusion method based on Dempster-Shafer theory and an alpha-stable distance. This method takes into consideration the information in the region surrounding each pixel. Indeed, at each pixel, the method exploits the local variability, calculated from the quadratic difference between the value of the pixel I(x,y) and the values of all pixels that belong to its neighbourhood. Local variability is used to determine the mass function. In this work, two classes in Dempster-Shafer theory are considered: the blurred part and the focused part. We show that our method gives significant results.
Multi-focus-images, Dempster-Shafer Theory, Local distance, Alpha Stable.
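The local-variability measure described above, the quadratic difference between a pixel and its neighbours, can be sketched as follows. The 3×3 neighbourhood radius and the two toy patches are illustrative assumptions; the paper combines this measure with Dempster-Shafer mass functions, which are not reproduced here:

```python
import numpy as np

def local_variability(img, radius=1):
    """Mean squared difference between each pixel and its neighbourhood."""
    h, w = img.shape
    padded = np.pad(img.astype(float), radius, mode="edge")
    out = np.zeros((h, w))
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[radius + dy:radius + dy + h,
                             radius + dx:radius + dx + w]
            out += (img - shifted) ** 2
    n_neighbours = (2 * radius + 1) ** 2 - 1
    return out / n_neighbours

sharp = np.tile([[0.0, 1.0]], (4, 2))  # high-contrast (in-focus) patch
blurred = np.full((4, 4), 0.5)         # flat (out-of-focus) patch
```

The in-focus patch yields high local variability while the blurred patch yields none, which is the signal the fusion method uses to decide, pixel by pixel, which source image is in focus.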
Haiying Gao and Chao Ma, Information Engineering University, Zhengzhou, China
Non-zero inner product encryption provides fine-grained access control to private data, but existing non-zero inner product encryption schemes are mainly constructed from problems on bilinear groups and lattices and lack homomorphism. To meet the needs of users to control private data and of cloud servers to process ciphertexts directly in a cloud computing environment, this paper designs a non-zero inner product encryption scheme based on the DCR assumption. Specifically, the access control policy is embedded in the ciphertext via a vector y, and the user attribute vector x is embedded in the secret key. If the inner product of the encryptor's policy vector y and the decryptor's attribute vector x is non-zero, the decryptor can decrypt correctly. This scheme is additively homomorphic in the plaintext-ciphertext space, and it can be proved additively homomorphic and adaptively secure.
Non-Zero Inner Product Encryption, Adaptive Security, Decisional Composite Residuosity.
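The additive homomorphism the scheme relies on comes from the DCR setting. A minimal Paillier-style sketch, with toy and deliberately insecure parameters, illustrates the generic DCR mechanism whereby the product of two ciphertexts decrypts to the sum of the plaintexts; this is an illustration of the assumption's homomorphic structure, not the paper's inner-product scheme:

```python
import math
import random

# Toy Paillier parameters (insecure sizes, for illustration only).
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)

def encrypt(m, rng=random.Random(7)):
    """Enc(m) = g^m * r^n mod n^2 for random r coprime to n."""
    r = rng.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = rng.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """m = L(c^lam mod n^2) * lam^-1 mod n, where L(u) = (u - 1) / n."""
    u = pow(c, lam, n2)
    m_lam = (u - 1) // n
    return (m_lam * pow(lam, -1, n)) % n

c1, c2 = encrypt(41), encrypt(1)
total = decrypt((c1 * c2) % n2)  # ciphertext product = plaintext sum
```

Multiplying ciphertexts without ever decrypting them is what lets a cloud server process encrypted data directly, as the abstract motivates.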
Abeer Alsadhan, Assistant Professor, Computer Science Department, Imam Abdulrahman Bin Faisal University
The protection of digital network systems against attacks is pivotal to ensure data and information confidentiality, integrity and availability. Generic attacks seek to obtain the secret key of given ciphertext(s), thereby compromising the confidentiality of the data. The generic attack, among contemporary network attacks, is a nefarious attack capable of attacking one block cipher or a whole group of block ciphers and can utilize parallelism to accomplish its task. Among various countermeasures to this attack, machine learning is useful in detecting a generic attack on any network through a process called classification. A classification procedure entails discriminating whether a packet is executing a generic attack or not. Thus, this study implemented homogeneous ensembles of decision trees to detect generic attacks. Two decision forests were implemented using the Forest by Penalizing Attributes and Random Forest algorithms to develop two methods, namely the Forest by Penalizing Attributes Generic Attack Detector (FoGAD) and the Random Forest Generic Attack Detector (RaFoGAD). Using an equal number of generic-attack and normal packets found in the UNSW-NB15 dataset, both methods were fitted and primarily evaluated using the cross-validation technique. The fitted models were also subjected to a secondary evaluation to ensure their stability, robustness and reliability. FoGAD and RaFoGAD performances were measured and reported using all available metrics. FoGAD and RaFoGAD achieved an equal primary accuracy of 99.868, MCC score of 0.997 and false positive rate of 0.001, among others. The secondary evaluation showed a slight but insignificant difference between the two methods, with FoGAD appearing better than RaFoGAD in terms of accuracy, MCC score and false positive rate. Conclusively, the study presents two ensemble machine learning methods for detecting generic attacks.
Generic attack, block cipher, machine learning, network security, Cybersecurity.
Ilnaz Nikseresht, Issa Traore, and Amirali Baniasadi, Electrical and Computer Engineering, University of Victoria, Victoria, BC V8P 5C2, Canada
The Activity and Event Network Model (AEN) is a new security knowledge graph that leverages large dynamic uncertain graph theory to capture and analyze stealthy and long-term attack patterns. Because the graph is expected to become extremely large over time, it can be very challenging for security analysts to navigate it and identify meaningful information. We present different visualization layers deployed to improve the graph model's presentation. The main goal is to build an enhanced visualization system that can simply and effectively overlay different visualization layers, namely the edge/node type, node property, node age, node probability of compromise, and threat horizon layers. With the help of the developed layers, network security analysts can identify suspicious network security events and activities as soon as possible.
data visualization, security, intrusion detection system, intrusion prevention system.
YanJun Li1,3, WeiGuo Zhang2 and YaoDong Ge2, 1The 15th Research Institute of China Electronics Technology Group Corporation, Information Industry Information Security Evaluation Center, Beijing, China, 2Beijing Electronic Science and Technology Institute, Beijing, China, 3Guangxi Key Laboratory of Cryptography and Information Security, Guilin, Guangxi, China
Since the side channel attack was proposed, it has posed a great threat to block cipher algorithms. The DPA attack (Differential Power Analysis) is one of the most typical and effective side channel attacks. The Camellia algorithm is a typical block cipher algorithm, and the security of its hardware implementation has attracted much attention. From the perspective of the S-box, this paper proposes a relatively simple S-box algebraic expression for the Camellia algorithm, and based on this expression gives an S-box for Camellia that resists second-order DPA attacks. The scheme uses a polynomial basis to construct the finite field and combines threshold theory to realize it. This research will have a positive impact on the hardware optimization of the Camellia S-box.
Camellia Algorithm, Polynomial basis, S-box, DPA attack, threshold scheme.
Valentina Zhang, Phillips Exeter Academy, Exeter NH, USA
While facial expression is a complex and individualized behavior, all facial emotion recognition (FER) systems known to us rely on a single facial representation and are trained on universal data. We conjecture that: (i) different facial representations can provide complementing views of emotions; (ii) when employed collectively in a discussion group setting, they enable accurate FER which is highly desirable in autism care and applications sensitive to errors. In this paper, we first study FER using pixel-based DL vs semantics-based DL in the context of deepfake videos. The study confirms our conjecture. Armed with these findings, we have constructed an adaptive FER system learning from both types of models for dyadic or small interacting groups and further leveraging the synthesized group emotions as the ground truth for individualized FER training. Using a collection of group conversation videos, we demonstrate that FER accuracy and personalization can benefit from such an approach.
emotion recognition, facial representations, adaptive algorithm, training data ground truth.
Richa Singh*1, P. S. V. Nataraj2, and Arnab Maity1, 1Department of Aerospace Engineering, Indian Institute of Technology Bombay, Mumbai - 400076, India, 2IDP in Systems and Control Engineering, Indian Institute of Technology Bombay, Powai, India - 400076
Inspired by recent progress in machine learning, a data-driven fault diagnosis and isolation (FDI) scheme is developed for failures in the fuel supply system and sensor measurements of a laboratory gas turbine system. A passive approach to fault diagnosis is implemented in which a model is trained using machine learning classifiers to detect, in real time, a given set of fault scenarios on which it has been trained. Towards the end, a comparative study is presented for well-known classification techniques, namely the support vector classifier, linear discriminant analysis, K-neighbors and decision trees. Several simulation studies were carried out to demonstrate and illustrate the proposed fault diagnosis scheme's advantages, capabilities, and performance.
Confusion matrix, Fault diagnosis, Gas turbine engine, Machine learning.
Emre Can Kuran1, Umut Kuran2 and Mehmet Bilal Er2, 1Department of Software Engineering, Bandırma Onyedi Eylül University, Balıkesir, Turkey, 2Department of Computer Engineering, Harran University, Şanlıurfa, Turkey
Contrast enhancement is very important for assessing images in an objective way. It is also significant for various supervised and unsupervised algorithms aiming at accurate classification of samples. Some contrast enhancement algorithms address the low-contrast issue directly. The mean and variance based sub-image histogram equalization (MVSIHE) algorithm is one such contrast enhancement method proposed in the literature. It has several parameters which need to be tuned in order to achieve optimal results. With this motivation, in this study we employ one of the most recent optimization algorithms, namely the coot optimization algorithm (COA), to select appropriate parameters for the MVSIHE algorithm. The blind/referenceless image spatial quality evaluator (BRISQUE) and natural image quality evaluator (NIQE) metrics are used for evaluating the fitness of the coot swarm population. The results show that the proposed method can be used in the field of biomedical image processing.
Contrast Enhancement, Coot Optimization Algorithm, Knee X-Ray Images, Biomedical Image Processing.
David Noever1 and Ryerson Burdick2, 1PeopleTec, Inc., Huntsville, AL, USA, 2University of Maryland, College Park, MD, USA
The application of the Generative Pre-trained Transformer (GPT-2) to learn text-archived game notation provides a model environment for exploring sparse-reward gameplay. The transformer architecture proves amenable to training on solved text archives describing mazes, Rubik's Cube, and Sudoku solvers. The method benefits from fine-tuning the transformer architecture to visualize plausible strategies derived outside any guidance from human heuristics or domain expertise. The large search space (>10^19) for the games provides a puzzle environment in which the solution has few intermediate rewards and a final move that solves the challenge.
Natural Language Processing (NLP), Transformers, Game Play, Deep Learning.
Muhammad Muneeb, Samuel F. Feng, and Andreas Henschel, Department of Mathematics and Department of Electrical Engineering and Computer Science, Khalifa University of Science and Technology, Abu Dhabi, UAE
This article proposes and documents a machine-learning framework and tutorial for classifying images using mobile phones. Compared to computers, the performance of deep learning models degrades when deployed on a mobile phone, and a systematic approach is required to find a model that performs optimally on both. By following the proposed pipeline, which consists of various computational tools, simple procedural recipes, and technical considerations, one can bring the power of deep learning medical image classification to mobile devices, potentially unlocking new domains of applications. The pipeline is demonstrated on four different publicly available datasets: COVID X-rays, COVID CT scans, leaves, and colorectal cancer. We used two application development frameworks, TensorFlow Lite (real-time testing) and Flutter (digital image testing), to test the proposed pipeline. We found that transferring deep learning models to a mobile phone is limited by hardware and that classification accuracy drops. To address this issue, we propose this pipeline to find an optimized model for mobile phones. Finally, we discuss additional applications and computational concerns related to deploying deep learning models on phones, including real-time analysis and image preprocessing. We believe the associated documentation and code can help physicians and medical experts develop medical image classification applications for distribution.
Image classification, machine learning, medical image classification, mobile phone application, cancer.
Onyebuchi Ekenta and Ming Gu
The CUR decomposition is a popular tool for computing a low-rank factorization of a matrix in terms of a small number of its columns and rows. CUR decompositions are favored in some use cases because they have a higher degree of interpretability and are able to preserve the sparsity of the input matrix. Previous random sampling-based approaches can construct CUR decompositions with relative-error bounds with high probability; however, these methods are costly to run on practical datasets. We implement a novel algorithm to compute CUR approximations of sparse matrices. Our method comes with relative-error bounds for matrices with rapidly decaying spectrum and runs in time that is nearly linear in the matrix dimensions m and n.
CUR Matrix Approximation, data analysis, sparse matrix.
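The C·U·R form itself can be sketched as follows. This is not the authors' algorithm; it uses deterministic rather than randomized column/row selection, on an exactly rank-2 matrix where any two independent columns and rows reconstruct the matrix exactly:

```python
import numpy as np

def cur(A, cols, rows):
    """CUR approximation from chosen column and row indices: A ~ C @ U @ R."""
    C = A[:, cols]
    R = A[rows, :]
    # U = C^+ A R^+ is the optimal middle factor for the chosen C and R.
    U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)
    return C, U, R

# Rank-2 test matrix: columns of A live in a 2-dimensional subspace.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 5))
C, U, R = cur(A, cols=[0, 1], rows=[0, 1])
err = np.linalg.norm(A - C @ U @ R)
```

Because C and R are actual columns and rows of A, the factors retain the input's sparsity pattern and are directly interpretable, which is the advantage over an SVD that the abstract highlights.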