dc.rights.license | In Copyright | en_US |
dc.creator | AboEitta, Abdelrahman | |
dc.date.accessioned | 2023-05-18T13:18:40Z | |
dc.date.available | 2023-05-18T13:18:40Z | |
dc.date.created | 2023 | |
dc.identifier | WLURG038_AboEitta_CSCI_2023 | |
dc.identifier.uri | https://dspace.wlu.edu/handle/11021/36215 | |
dc.description | Thesis; [FULL-TEXT FREELY AVAILABLE ONLINE] | en_US |
dc.description | Abdelrahman Aboeitta is a member of the Class of 2023 of Washington and Lee University. | en_US |
dc.description.abstract | Artificial Intelligence (AI) has experienced rapid growth in recent years and is receiving unprecedented attention due to the impressive results achieved in various domains, notably computer vision and natural language processing. Two key technologies enabled these groundbreaking results: (1) Convolutional Neural Networks (CNNs), designed for image and video processing, and (2) backpropagation, an algorithm for training AI models. Despite their remarkable capabilities, however, these technologies face criticism for several reasons. First, CNNs are limited in their applicability to many real-world problems, primarily because of their high computational and memory requirements. Furthermore, backpropagation is criticized as biologically implausible, since it does not reflect how the human brain works. To address these limitations, this thesis introduces a novel neuromorphic approach that exploits two biologically inspired technologies: (1) the Dynamic Vision Sensor (DVS), an event-based camera that consumes only 0.1% of the power required by traditional cameras while generating thousands of frames, and (2) Hyperdimensional Computing (HDC), a learning algorithm that avoids backpropagation by imitating the brain. By integrating technologies inspired by the human brain, the primary goal of this research is to develop more efficient and adaptable AI systems that can handle a variety of real-world problems, overcoming the constraints faced by CNNs and backpropagation. To demonstrate the effectiveness of this approach, I propose an AI classifier, built using HDC, that recognizes two simple hand gestures from data captured by a DVS. The model achieves a high accuracy of 99%. Despite its simplicity, this approach serves as a compelling proof of concept, showcasing the potential of combining biologically inspired technologies (DVS and HDC) to achieve substantial gains in efficiency and performance. | en_US |
dc.format.extent | 32 pages | en_US |
dc.language.iso | en_US | en_US |
dc.rights | This material is made available for use in research, teaching, and private study, pursuant to U.S. Copyright law. The user assumes full responsibility for any use of the materials, including but not limited to, infringement of copyright and publication rights of reproduced materials. Any materials used should be fully credited with the source. | en_US |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en_US |
dc.subject.other | Washington and Lee University -- Honors in Computer Science | en_US |
dc.title | Hyperdimensional Computing for Gesture Recognition Using a Dynamic Vision Sensor | en_US |
dc.type | Text | en_US |
dcterms.isPartOf | RG38 - Student Papers | |
dc.rights.holder | AboEitta, Abdelrahman | |
dc.subject.fast | Artificial intelligence | en_US |
dc.subject.fast | Back propagation (Artificial intelligence) | en_US |
dc.subject.fast | Neural networks (Computer science) | en_US |
local.department | Computer Science | en_US |
local.scholarshiptype | Honors Thesis | en_US |