Published in Association for Computing Machinery (ACM) Symposium DocEng, 2018
Extracting key terms from technical documents allows us to write effective documentation that is specific and clear, with minimal ambiguity and confusion caused by nearly synonymous but different terms. We present a method for Automatic Term Extraction (ATE) in the technical domain based on part-of-speech features and common-words information, which achieves results comparable to or better than the reported state of the art.
Recommended citation: "Automatic Term Extraction in Technical Domain using Part-of-Speech and Common Word Features", ACM Symposium DocEng 2018, Nisha Simon and Vlado Keselj, August 2018, Article No.: 51, pp 1–4. https://doi.org/10.1145/3209280.3229100
Published in The International Conference on Automated Planning and Scheduling (ICAPS), 2021
The goal of this preliminary work is to predict the next completion in PDDL code, based on previous and surrounding text. The ability to generate PDDL code will be extremely useful to PDDL practitioners for the purpose of solving planning problems. It further opens the door to providing a source of inspiration for the modeller. The main contribution of our approach is a language model built using Recurrent Neural Networks (RNNs) that is trained on existing PDDL domains, which can be used to generate PDDL-like code.
Recommended citation: "A Natural Language Model for Generating PDDL", Nisha Simon and Christian Muise, Proceedings of The International Conference on Automated Planning and Scheduling (ICAPS) 2021, KEPS Workshop, August 2021, pp 1–8. https://icaps21.icaps-conference.org/workshops/KEPS/Papers/KEPS_2021_paper_7.pdf
Published in The International Conference on Automated Planning and Scheduling (ICAPS), 2022
We explore how automated planning can be applied to Natural Language text generation in order to create narratives (stories) that are coherent and believable. Our work represents a key first step towards the novel application of planning technology to a neuro-symbolic approach for effective story generation.
Recommended citation: "TattleTale: Storytelling with Planning and Large Language Models", Proceedings of The International Conference on Automated Planning and Scheduling (ICAPS) 2022, SPARK Workshop, Nisha Simon and Christian Muise, June 2022, pp 1–8. https://icaps22.icaps-conference.org/workshops/SPARK/papers/spark2022_paper_2.pdf
Published in Twenty-Seventh AAAI/SIGAI Doctoral Consortium, Association for the Advancement of Artificial Intelligence (AAAI), 2024
While recent work has shown that LLMs can successfully be used for narrative generation, the stories they produce can lack coherence and be prone to repetition and stilted language. Automated Planning can therefore be combined with Natural Language text generation to create narratives (stories) that are logical, coherent, and believable.
Recommended citation: "Does Robin Hood Use a Lightsaber?: Automated Planning for Storytelling", Twenty-Seventh AAAI/SIGAI Doctoral Consortium, Association for the Advancement of Artificial Intelligence (AAAI), Nisha Simon, February 2024, pp 1–2. https://doi.org/10.1609/aaai.v38i21.30411
Published in The 37th Canadian Conference on Artificial Intelligence (Canadian AI), 2024
We combine the fields of Automated Planning and Text Generation to show how text-based interactive Choose Your Own Adventure (CYOA) stories can be created using LLMs in conjunction with Fully Observable Non-Deterministic (FOND) based Automated Planning.
Recommended citation: "Want To Choose Your Own Adventure? Then First Make a Plan.", The 37th Canadian Conference on Artificial Intelligence (Canadian AI) 2024, Nisha Simon and Christian Muise, May 2024, pp 1–6. https://caiac.pubpub.org/pub/d2ujhb4x
Published in The 37th Canadian Conference on Artificial Intelligence (Canadian AI), 2024
By combining Automated Planning with Natural Language Text Generation by LLMs we can create logical, believable, and coherent stories that can be used in a wide variety of domains, for a large range of applications.
Recommended citation: "Large Language Models are Incoherent Storytellers.", The 37th Canadian Conference on Artificial Intelligence (Canadian AI) 2024, Nisha Simon, May 2024, pp 1–5. https://caiac.pubpub.org/pub/3w466klp
Extracting key terms from technical documents allows us to write effective documentation that is specific and clear, with minimum ambiguity and confusion caused by nearly synonymous but different terms. For instance, in order to avoid confusion, the same object should not be referred to by two different names (e.g. “hydraulic oil filter”). In the modern world of commerce, clear terminology is the hallmark of successful RFPs (Requests for Proposal) and is therefore a key to the growth of competitive organizations. While Automatic Term Extraction (ATE) is a well-developed area of study, its applications in the technical domain have been sparse and constrained to certain narrow areas such as the biomedical research domain. We present a method for Automatic Term Extraction (ATE) for the technical domain based on the use of part-of-speech features and common words information.
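The paper's actual feature set is not reproduced here, but the general idea of POS-pattern candidate extraction plus a common-words filter can be sketched as follows. The tag set, the `COMMON_WORDS` list, and the example sentence are illustrative assumptions, not the method's real configuration.

```python
# Minimal sketch of POS-based term extraction: collect maximal
# spans matching (ADJ)* (NOUN)+, then filter by a common-words list.
# Tags, word list, and example are hypothetical, for illustration only.

COMMON_WORDS = {"system", "process", "use", "result"}  # assumed filter list

def extract_candidates(tagged_tokens):
    """Collect maximal spans matching the pattern (ADJ)* (NOUN)+."""
    candidates, span = [], []
    for word, tag in tagged_tokens:
        if tag == "NOUN" or (tag == "ADJ" and not any(t == "NOUN" for _, t in span)):
            span.append((word, tag))
        else:
            if any(t == "NOUN" for _, t in span):
                candidates.append(" ".join(w for w, _ in span))
            span = []
    if any(t == "NOUN" for _, t in span):
        candidates.append(" ".join(w for w, _ in span))
    return candidates

def filter_terms(candidates):
    """Keep multi-word candidates whose head noun is not a common word."""
    return [c for c in candidates
            if " " in c and c.split()[-1].lower() not in COMMON_WORDS]

tagged = [("the", "DET"), ("hydraulic", "ADJ"), ("oil", "NOUN"),
          ("filter", "NOUN"), ("removes", "VERB"), ("particles", "NOUN")]
print(filter_terms(extract_candidates(tagged)))  # ['hydraulic oil filter']
```

A real system would obtain the tags from a POS tagger and learn the filtering statistics from a corpus rather than a fixed word list.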
The goal of this preliminary work is to predict the next completion in PDDL code, based on previous and surrounding text. Generating valid PDDL code is a key component in creating robust planners. Thus, the ability to generate PDDL code will be extremely useful to PDDL practitioners for the purpose of solving planning problems. It further opens the door to providing a source of inspiration for the modeller. The main contribution of our approach is a language model built using Recurrent Neural Networks (RNNs) that is trained on existing PDDL domains, which can be used to generate PDDL-like code.
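The paper trains an RNN; as a self-contained stand-in, the toy bigram model below illustrates the same next-token-prediction objective over PDDL-like tokens. The training fragment is an invented blocks-world snippet, not one of the paper's actual domains.

```python
# Toy next-token predictor over PDDL tokens: a bigram frequency model
# standing in for the paper's RNN language model. The corpus below is
# an invented fragment, for illustration only.
from collections import Counter, defaultdict

def tokenize(text):
    # Pad parentheses so they become their own tokens, then split.
    return text.replace("(", " ( ").replace(")", " ) ").split()

def train_bigram(corpus):
    counts = defaultdict(Counter)
    tokens = tokenize(corpus)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, token):
    """Return the most frequent token observed after `token`."""
    if token not in counts:
        return None
    return counts[token].most_common(1)[0][0]

corpus = """
(define (domain blocks)
  (:predicates (on ?x ?y) (clear ?x))
  (:action stack
    :parameters (?x ?y)
    :precondition (and (clear ?x) (clear ?y))
    :effect (and (on ?x ?y))))
"""
model = train_bigram(corpus)
print(predict_next(model, ":precondition"))  # prints "("
```

An RNN replaces the frequency table with a learned hidden state, letting predictions depend on the whole preceding context rather than a single token.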
We demonstrate the use of a planning model that provides scaffolding to an LLM so that its language generation is context-dependent, creating more coherent and believable stories in a variety of domains.
Automated Planning can be combined with Natural Language text generation to create narratives (stories) that are logical, coherent, and believable. A planning model provides scaffolding to an LLM so that the LLM's language generation is context-dependent, allowing users to create more coherent, logical, and believable stories in a variety of domains.
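The plan-as-scaffold idea can be sketched as follows: each action in a symbolic plan becomes a constrained generation step. The `realize` function below is a hypothetical template stub standing in for an LLM call, and the plan itself is invented for illustration.

```python
# Sketch of plan-scaffolded story generation: the symbolic plan fixes the
# event order; each step is independently turned into a sentence.
# `realize` is a stub standing in for an LLM prompt, not the authors' pipeline.

PLAN = [
    ("travel", "robin", "forest", "nottingham"),
    ("meet", "robin", "sheriff"),
    ("duel", "robin", "sheriff"),
]

TEMPLATES = {
    "travel": "{0} journeys from the {1} to {2}.",
    "meet":   "{0} comes face to face with the {1}.",
    "duel":   "{0} crosses swords with the {1}.",
}

def realize(action, *args):
    """Stand-in for an LLM: turn one plan step into a sentence.

    A real system would prompt the model with the action and its
    arguments; a template keeps the sketch self-contained."""
    return TEMPLATES[action].format(*args)

def tell_story(plan):
    # Because the plan constrains what happens and in what order, the
    # narrative stays coherent even though each sentence is generated
    # independently.
    return " ".join(realize(step[0], *step[1:]) for step in plan)

print(tell_story(PLAN))
```

Swapping the template stub for a prompted LLM call preserves the key property: the planner, not the language model, decides the sequence of events.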
A survey of techniques that combine Automated Planning with Natural Language text generation to create narratives (stories) that are logical, coherent, and believable. Guest lecture for course CMSC 491/691 - Interactive Fiction and Text Generation at UMBC.
Issues of bias and unreliability in LLMs, and how they can be mitigated using a verification step such as a valid plan.