Please use this identifier to cite or link to this item:
http://hdl.handle.net/1893/35988
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Swingler, Kevin | - |
dc.contributor.author | Johnston, Penny | - |
dc.date.accessioned | 2024-05-06T10:37:52Z | - |
dc.date.available | 2024-05-06T10:37:52Z | - |
dc.date.issued | 2023-09-30 | - |
dc.identifier.citation | P. Johnston, K. Nogueira and K. Swingler, "GMM-IL: Image Classification Using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes," in IEEE Access, vol. 11, pp. 25492-25501, 2023, https://doi.org/10.1109/ACCESS.2023.3255795 | en_GB |
dc.identifier.citation | P. Johnston, K. Nogueira and K. Swingler, "NS-IL: Neuro-Symbolic Visual Question Answering Using Incrementally Learnt, Independent Probabilistic Models for Small Sample Sizes," in IEEE Access, vol. 11, pp. 141406-141420, 2023, https://doi.org/10.1109/ACCESS.2023.3341007 | en_GB |
dc.identifier.uri | http://hdl.handle.net/1893/35988 | - |
dc.description.abstract | This research is motivated by the challenge of providing accurate and contextually relevant answers to natural language questions about visual scenes, particularly in support of individuals with visual impairments. Neural-Symbolic computing aims to unlock both the robust learning capabilities of neural networks and the reasoning and interpretability of symbolic representation through their integration. This thesis introduces a Neuro-Symbolic Incremental Learner designed specifically for the Visual Question Answering task. The system incrementally learns visual classes and symbolic facts to answer natural language questions about visual scenes. Using Deep Learning, a feature space is created from which visual classes are learnt as independent probability distributions. This allows new classes to be added easily, even with limited data, mitigating the catastrophic forgetting typical of traditional neural networks. Classification by category means visual classes are not limited to objects but can also include other categories, such as attributes. A knowledge graph stores facts about regions of interest, detailing objects, attributes, actions, locations, and inter-relations, so that facts are stored explicitly and can be added incrementally. Leveraging a large language model, the system translates natural language questions into knowledge graph queries, ensuring a fluid visual question-answering experience. | en_GB |
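The abstract's core idea of learning each visual class as an independent probability distribution over a learnt feature space can be sketched as follows. This is a minimal illustration using scikit-learn's `GaussianMixture`, not the thesis code; the class names, embedding dimensionality, and synthetic feature values are placeholders:

```python
# Sketch: one independent Gaussian Mixture Model per visual class, fitted over
# embedding vectors. A new class is added by fitting one more GMM, leaving the
# existing models untouched, so earlier classes are not forgotten.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)


class IncrementalGMMClassifier:
    def __init__(self, n_components=2):
        self.n_components = n_components
        self.models = {}  # class label -> fitted GaussianMixture

    def add_class(self, label, embeddings):
        # Fit an independent model for this class only; no retraining of
        # existing classes is needed when new classes arrive.
        gmm = GaussianMixture(n_components=self.n_components, random_state=0)
        gmm.fit(embeddings)
        self.models[label] = gmm

    def predict(self, embedding):
        # Assign the class whose mixture gives the highest log-likelihood.
        scores = {label: gmm.score_samples(embedding[None, :])[0]
                  for label, gmm in self.models.items()}
        return max(scores, key=scores.get)


clf = IncrementalGMMClassifier()
clf.add_class("cat", rng.normal(0.0, 1.0, size=(50, 8)))
clf.add_class("dog", rng.normal(5.0, 1.0, size=(50, 8)))
# Adding a third class later does not disturb the first two:
clf.add_class("bird", rng.normal(-5.0, 1.0, size=(50, 8)))
print(clf.predict(np.full(8, 5.0)))  # expected: dog
```

Because each class owns its own model, the per-class likelihoods stay valid as the set of classes grows, which is what makes small-sample incremental addition tractable in this framing.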
dc.language.iso | en | en_GB |
dc.publisher | University of Stirling | en_GB |
dc.subject | Neuro-Symbolic | en_GB |
dc.subject | Incremental Learning | en_GB |
dc.subject | Deep Learning | en_GB |
dc.subject | Autoencoders | en_GB |
dc.subject | Gaussian Mixture Models | en_GB |
dc.subject | Digital Assistant | en_GB |
dc.subject.lcsh | Deep learning (Machine learning) | en_GB |
dc.subject.lcsh | Machine learning | en_GB |
dc.subject.lcsh | Artificial intelligence | en_GB |
dc.subject.lcsh | Predictive control | en_GB |
dc.subject.lcsh | Vision disorders | en_GB |
dc.subject.lcsh | Optical materials | en_GB |
dc.title | A Neuro-Symbolic Incremental Learner Model for the Visual Question Answering Task | en_GB |
dc.type | Thesis or Dissertation | en_GB |
dc.type.qualificationlevel | Doctoral | en_GB |
dc.type.qualificationname | Doctor of Philosophy | en_GB |
dc.author.email | PS-Johnston@hotmail.com | en_GB |
Appears in Collections: | Computing Science and Mathematics eTheses |
Files in This Item:
File | Description | Size | Format |
---|---|---|---|
Thesis_2631677.pdf | Thesis pdf (Sept 2023) | 11.13 MB | Adobe PDF |
This item is protected by original copyright.