
dc.rights.license: In Copyright
dc.creator: AboEitta, Abdelrahman
dc.date.accessioned: 2023-05-18T13:18:40Z
dc.date.available: 2023-05-18T13:18:40Z
dc.date.created: 2023
dc.identifier: WLURG038_AboEitta_CSCI_2023
dc.identifier.uri: https://dspace.wlu.edu/handle/11021/36215
dc.description: Thesis; [FULL-TEXT FREELY AVAILABLE ONLINE]
dc.description: Abdelrahman Aboeitta is a member of the Class of 2023 of Washington and Lee University.
dc.description.abstract: Artificial Intelligence (AI) has experienced rapid growth in recent years and is receiving unprecedented attention due to the impressive results achieved in various domains, notably computer vision and natural language processing. Two key technologies enabled these groundbreaking results: (1) Convolutional Neural Networks (CNNs), designed for image and video processing, and (2) backpropagation, an algorithm for training AI models. Despite their remarkable capabilities, both face criticism. First, CNNs are limited in their applicability to many real-world problems, primarily because of their high computational and memory requirements. Second, backpropagation is criticized as biologically implausible, since it does not reflect how the human brain learns. To address these limitations, this thesis introduces a novel neuromorphic approach that exploits two biologically inspired technologies: (1) the Dynamic Vision Sensor (DVS), an event-based camera that consumes only 0.1% of the power required by traditional cameras while generating thousands of frames, and (2) Hyperdimensional Computing (HDC), a learning algorithm that avoids backpropagation by imitating the brain. By integrating technologies inspired by the human brain, the primary goal of this research is to develop more efficient and adaptable AI systems that can handle a variety of real-world problems, overcoming the constraints faced by CNNs and backpropagation. To demonstrate the effectiveness of this approach, I propose an AI classifier, built with HDC and trained on data captured by a DVS, that recognizes two simple hand gestures. The model achieved an accuracy of 99%. Despite its simplicity, my approach serves as a compelling proof of concept, showcasing the potential of combining biologically inspired technologies (DVS and HDC) to achieve substantial gains in efficiency and performance.
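The core HDC idea the abstract describes (learning without backpropagation by bundling high-dimensional vectors) can be illustrated with a minimal sketch. This is not the thesis's actual pipeline: the dimensionality, the noise model standing in for encoded DVS events, and the gesture labels ("left", "right") are all assumptions made for illustration. It shows how random bipolar hypervectors are nearly orthogonal, how training reduces to elementwise bundling into one prototype per class, and how classification is a similarity lookup.

```python
import random

D = 10_000  # hypervector dimensionality (HDC typically uses ~10k dimensions)

def random_hv():
    # Random bipolar hypervector; in high dimensions, two random
    # hypervectors are nearly orthogonal, which HDC relies on.
    return [random.choice((-1, 1)) for _ in range(D)]

def bundle(hvs):
    # Elementwise majority vote ("bundling"): the result is similar
    # to every input hypervector, acting as a class prototype.
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def similarity(a, b):
    # Normalized dot product (cosine similarity for bipolar vectors).
    return sum(x * y for x, y in zip(a, b)) / D

def noisy(hv, flip=0.2):
    # Hypothetical stand-in for encoded DVS event data: a sample is
    # its class hypervector with 20% of components flipped.
    return [-x if random.random() < flip else x for x in hv]

random.seed(0)
class_seeds = {"left": random_hv(), "right": random_hv()}

# "Training" is a single pass of bundling -- no backpropagation.
prototypes = {c: bundle([noisy(hv) for _ in range(10)])
              for c, hv in class_seeds.items()}

def classify(sample):
    # Predict the class whose prototype is most similar to the sample.
    return max(prototypes, key=lambda c: similarity(prototypes[c], sample))

print(classify(noisy(class_seeds["left"])))  # a noisy "left" sample maps back to "left"
```

Even with a fifth of the components corrupted, the sample stays far closer to its own prototype than to the other class's, which is why HDC is robust to the sparse, noisy event streams a DVS produces.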
dc.format.extent: 32 pages
dc.language.iso: en_US
dc.rights: This material is made available for use in research, teaching, and private study, pursuant to U.S. Copyright law. The user assumes full responsibility for any use of the materials, including, but not limited to, infringement of copyright and publication rights of reproduced materials. Any materials used should be fully credited with the source.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject.other: Washington and Lee University -- Honors in Computer Science
dc.title: Hyperdimensional Computing for Gesture Recognition Using a Dynamic Vision Sensor
dc.type: Text
dcterms.isPartOf: RG38 - Student Papers
dc.rights.holder: AboEitta, Abdelrahman
dc.subject.fast: Artificial intelligence
dc.subject.fast: Back propagation (Artificial intelligence)
dc.subject.fast: Neural networks (Computer science)
local.department: Computer Science
local.scholarshiptype: Honors Thesis

