Hendersen launches experimental AI assistant

We are pleased to announce that Hendersen Taxand has launched its AI assistant on an experimental basis. You can try it out at

What is it?

2023 is the year of AI. With the emergence of large language models ("LLMs"), image and video generation models, and a range of related applications, people across many trades are feeling the impact of AI on how they live and how they work.

While many people may doubt AI's effectiveness at complicated tasks traditionally handled by humans, change is probably already on the way...

Our app is therefore an experiment in applying AI to the professional field. It draws on the knowledge and experience generated in our professional work, builds them into an AI knowledge base, and provides an easy-to-use human-AI interface for accessing that knowledge and experience.

The Issue

A major issue with using AI in the professional field is "AI hallucination". The term refers to a phenomenon in which an AI model perceives patterns or objects that are nonexistent or imperceptible to humans, resulting in nonsensical or inaccurate outputs. In plain language: AI lies, sometimes. Well, even frequently.

This type of AI hallucination is often seen in large language models such as GPT and can occur due to various factors such as biases in the training data, unexpected inputs, or errors within the model itself. These hallucinations can lead to incorrect predictions and sometimes generate harmful content such as hate speech or misinformation.

This is not acceptable in the professional field!

How It Works

Deep learning models that learn from data without explicit human-crafted rules have shown immense potential in applications such as natural language processing (NLP) and image recognition. Such models nevertheless make errors on unseen examples because their ability to generalize is limited. In those cases, incorporating relevant external knowledge during the training phase, for example through a knowledge graph or ontology, is often found helpful.

Meanwhile, at the application level, a possible solution is to supply an external knowledge base as context when the AI addresses a specific question. The key technique here is vector embedding, which is used in natural language processing (NLP) to map words or phrases onto numerical vectors. Each text is represented as a high-dimensional vector in which each dimension corresponds to a specific feature.
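As a toy illustration of the idea, the sketch below embeds a text as a vector of word counts over a small fixed vocabulary, so each dimension corresponds to one feature. Real systems use dense vectors learned by a neural model rather than raw counts, and the vocabulary here is invented for the example.

```python
from collections import Counter

def embed(text, vocabulary):
    """Toy embedding: map a text to a vector of word counts over a fixed
    vocabulary. Production systems use dense vectors learned by a model,
    but the principle is the same: each dimension encodes a feature."""
    counts = Counter(text.lower().split())
    return [counts[word] for word in vocabulary]

# Hypothetical vocabulary for illustration only
vocab = ["tax", "filing", "deadline", "audit"]
vector = embed("The tax filing deadline for the tax year", vocab)
print(vector)  # -> [2, 1, 1, 0]: one dimension per vocabulary word
```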

The main goal of vector embedding is to capture the semantic meaning and context of words within a language corpus. This enables machine learning algorithms that work with numerical inputs, such as neural networks, to operate on the meaning behind textual input.

Transforming your knowledge into a vector database also gives the AI a way to search for the context most relevant to a specific question. Unlike traditional search, which is based on words, vector search is based on meaning.
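A minimal sketch of that search step, assuming the documents have already been embedded: rank stored vectors by cosine similarity to the query vector, so the closest "meaning" wins even when no words are shared. The tiny in-memory database and its vectors are invented for illustration; a real deployment would use a dedicated vector store.

```python
import math

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way
    (similar meaning under the embedding), 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def search(query_vec, database):
    """Return documents ranked by similarity to the query vector."""
    return sorted(database, key=lambda d: cosine(query_vec, d["vector"]), reverse=True)

# Toy "vector database": pre-embedded passages (vectors are made up)
db = [
    {"text": "Filing deadlines for profits tax", "vector": [0.9, 0.8, 0.1]},
    {"text": "Office opening hours",             "vector": [0.1, 0.0, 0.9]},
]
best = search([1.0, 0.7, 0.0], db)[0]
print(best["text"])  # the deadline passage ranks first by meaning
```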

Based on our experiment, AI hallucinations can be reduced significantly by supplying the correct context, i.e., the knowledge base served from the vector store.

In summary: we build up our knowledge base, make it understandable to AI, and let the AI do the job.

Looking forward

There are other ways to improve the quality of AI responses: reinforcement learning, fine-tuning with LoRA, prompt engineering, and more. These will be rolled out step by step in our experiment.

The use of AI as an agent is another important area to look into. This involves building a system that can interact with humans in a natural and effective way while performing defined tasks on the user's behalf. With detailed planning, AI will be able to perform much more complicated tasks in the very near future.



Copyright © 2004-2023 | Hendersen Consulting Co. Ltd. All rights reserved.