Higher Education Category
Entry ID
788
Participant Type
Team
Expected Stream
Stream 2: Identifying an educational problem and proposing a prototype solution.

Section A: Project Information

Project Title:
SignAI: Sign Languages’ Artificial Interpreter (a two-way sign language interpreter that interprets the “signal” between deaf and hearing people)
Project Description (maximum 300 words):

Deaf people face serious communication difficulties: it is not easy for them to talk with hearing people, nor to receive a formal education and study normally, and illiteracy rates among them are significantly higher. (Luckner et al., 2005; Lederberg et al., 2012; Cannon & Guardino, 2012; Naseribooriabadi et al., 2017; Paul & Alqraini, 2019; GLOBO, 2023)

We are Separated, Bordered, and Muted. Not allowed to talk.

According to the Hong Kong Census and Statistics Department, Hong Kong has 246,200 persons with hearing difficulty, and the World Health Organization projects that by 2050 nearly 2.5 billion people will have some degree of hearing loss. (Education Bureau, n.d.; The Government of the Hong Kong Special Administrative Region - Press Releases, 2023; World Health Organization: WHO, 2025)

We aim to solve these communication difficulties. We strive to connect two worlds.

We work roughly in two directions:
1. Recognize the sign → text
2. Hear sound → text → sign animation

Today's tools, such as SSLIA 龍耳手譯寶, subtitle websites, or a human sign-language interpreter, all have difficulties: they are not real-time, can be unstable, and are costly. (香港政府新聞網, 2017; 龍耳SILENCE, 2018; De Meulder & Haualand, 2019; Schniedewind et al., 2020; 王, 2025)

SignAI is designed to solve these problems.

Why is our work important and critical? Because everyone, whether deaf or hearing, needs to socialise and talk anywhere and anytime. (Kersting, 1997; Wauters & Knoors, 2007; Hankins, 2012; Batten et al., 2013)

We can bring a timely interpreter to those who need it.
Let them hear,
Let them listen,
Let them talk.

File Upload

Section B: Participant Information

Personal Information (Team Member)
Title, First Name, Last Name, Organisation/Institution, Faculty/Department/Unit, Email, Phone Number, Current Study Programme, Current Year of Study
Mr. Chun KWOK, EdUHK, MIT, s1153938@s.eduhk.hk, 56606613, Bachelor's Programme, Year 3 (Contact Person / Team Leader)
Mr. Ka Lai TAM, EdUHK, MIT, s1154180@s.eduhk.hk, 55191017, Bachelor's Programme, Year 3
Ms. I Man LAM, EdUHK, MIT, s1154178@s.eduhk.hk, 91597854, Bachelor's Programme, Year 3

Section C: Project Details

Please answer the questions from the perspectives below regarding your project.
1. Problem Identification and Relevance in Education (Maximum 300 words)

Our motivation is personal. In middle school, one of us had a classmate who was almost deaf, yet she chose our mainstream school over a school for students with hearing problems. As my deskmate, she helped me understand some of the deeper issues facing the deaf. It made me realise how many deaf people live in our society whom we never notice on the street: they are constantly present, yet, like forgotten souls, they may be many and everywhere and still invisible in daily life.

Another team member, TAM, watched the film "The Way We Talk". He sympathised deeply with the experiences of deaf people, felt that action needed to be taken, and thought a lot about those with only mild hearing problems whose prospects nevertheless fall far behind ordinary people's. We immediately found we shared the same concern about the difficulties deaf people face in communicating with us, and created this project together.

2a. Feasibility and Functionality (for Streams 1&2 only) (Maximum 300 words)

We built all of this on existing tools, because these well-established foundations are the most stable.

The system works in two directions:
1. Recognise sign language → text
2. Listen to sound → text → sign animation

We used the following tools:
1. Pycharm CE

We may use additional tools in the future to further develop this technology and project.

For example, we have tested Roboflow, studied various existing papers (LSTM, GCN, etc.) and projects such as SSLIA, and may explore Kivy and BeeWare to package our application as an app, as well as Flask's WebView.

As future work, we could also review the SignLLM, AuralLLM, and SignMST-C papers to implement our project better.

For sign language recognition, we used these libraries:
1. OpenCV (camera capture and object detection)
2. MediaPipe (hand landmark detection)
3. JSON (saving trained gestures)
4. NumPy (numeric processing)

We record hand landmarks to train gestures and match live gestures against the trained data. Using the json module, we read frames from a webcam, save the captured landmarks to a .json file, detect motion, and display the recognised word in the top-left corner.
(See the detector demo videos in our PDF (clickable!))
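As a rough illustration of the matching step only, here is a minimal sketch in the spirit of our approach. It assumes the landmarks have already been extracted from a frame (e.g. by MediaPipe) and compares a live landmark vector against trained gestures loaded from our .json files; the gesture names, threshold, and coordinates below are made up purely for the demo.

```python
import numpy as np

def flatten(landmarks):
    """Flatten a list of (x, y, z) hand landmarks into one vector."""
    return np.asarray(landmarks, dtype=float).ravel()

def match_gesture(live, trained, threshold=0.5):
    """Return the closest trained gesture name, or None if nothing is near.

    `trained` maps gesture names to landmark lists, as recorded during
    training; `live` is one frame's landmarks.
    """
    live_vec = flatten(live)
    best_name, best_dist = None, float("inf")
    for name, landmarks in trained.items():
        dist = float(np.linalg.norm(live_vec - flatten(landmarks)))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Made-up one-landmark "gestures", purely for illustration.
trained = {"hello": [[0.1, 0.2, 0.0]], "thanks": [[0.8, 0.9, 0.0]]}
print(match_gesture([[0.12, 0.21, 0.0]], trained))  # prints: hello
```

A nearest-neighbour comparison like this keeps everything local and fast; a real version would normalise landmarks against hand position and scale before comparing.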

For sound-to-text-to-sign animation, we create 3D models in Blender to capture hand-sign motion and turn it into animation.

We export the 3D model to an .fbx file and then connect it to the Python code.

We classify the human rig into different parts; because we focus on the hand, we divide the hand into 20 parts.
(See the detector demo videos in our PDF (clickable!))

After creating those animations, we render each one to a .mov file; when the system detects user input via keyboard or speech, it plays the animation file whose name matches the recognised word.
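The input-to-animation lookup can be sketched as below. The file-naming convention (one .mov clip named after each word) matches what we describe above; the `known` parameter, which lets the sketch run without real files on disk, is an illustrative test hook.

```python
from pathlib import Path

def clips_for_sentence(text, folder="animations", known=None):
    """Turn recognised speech or typed text into the ordered list of
    sign-animation clips to play.

    Assumes one clip per word, named after that word (e.g. hello.mov).
    `known` is a set of file names used for testing without real files;
    when it is None we check the folder on disk instead.
    """
    clips = []
    for word in text.lower().split():
        name = f"{word}.mov"
        path = Path(folder) / name
        if (name in known) if known is not None else path.exists():
            clips.append(path.as_posix())
    return clips

print(clips_for_sentence("Hello thank you", known={"hello.mov", "you.mov"}))
# prints: ['animations/hello.mov', 'animations/you.mov']
```

Words without a clip are simply skipped here; a fuller version could fall back to fingerspelling them letter by letter.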

We plan to provide it to deaf users and collect their feedback on more detailed improvement requirements.

We will use questionnaires and conversations to evaluate its effectiveness.

2b. Technical Implementation and Performance (for Streams 3&4 only) (Maximum 300 words)

N/A

3. Innovation and Creativity (Maximum 300 words)

There are many tools meant to care for the deaf in our society, but surprisingly and sadly, most have major shortcomings. For example, the government provides appointment-based interpreter services; although these are free, they involve extremely long queues, and many large events offer no such service at all. These publicly funded measures cannot help deaf people communicate and interact with others in real time. We therefore planned this tool to help deaf people actually communicate.

Our idea is to use local computing to analyse gestures and respond, which smoothly avoids many of these problems: no queues, no fees, and no internet required.

Another unique and innovative point: many deaf people are illiterate, because it is hard to acquire broad knowledge in special-education schools, leaving some with a limited education; some know only sign language. Our project can convert text back into sign language, so deaf users can learn sign language by themselves, and those who cannot read can still understand. It also adds a sense of intimacy for deaf people.

(香港政府新聞網, 2017; 龍耳SILENCE, 2018; De Meulder & Haualand, 2019; Schniedewind et al., 2020; 王, 2025)

4. Scalability and Sustainability (Maximum 300 words)

In terms of scalability, our model can be adapted to various sign languages, not just Hong Kong Sign Language, because it simply plays animations based on recognised gestures; other sign languages are equally possible.

We considered these issues when choosing our direction at the beginning; for example, here are some sign languages we have listed:
1. Shanghai Sign Language
2. French Sign Language (the family behind the mainstream sign language of the United States and Canada, so there should be more online data; some groups in Hong Kong also use it)
3. Beijing Sign Language (There should be more databases.)
4. National Universal Sign Language (a new sign language established by the government in 2018, intended to unify usage across China in the future; we ordered the reference book, but it probably will not arrive before the competition deadline)
5. Alphabet Sign Language (fingerspelling: spell words out letter by letter; we used the American Sign Language letter signs)
6. British, Australian, and New Zealand Sign Language (there should be relatively plentiful information)
7. Japanese Sign Language (there should be relatively more information from the Taiwan side)
8. International Sign Language (used in international settings, there should be more information.)
9. Nicaraguan Sign Language (favoured by linguists, so the database should be especially plentiful)
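To support several sign languages like those above, one simple approach is a folder per language, each with its own trained gestures and animation clips. The sketch below assumes that hypothetical layout (the paths and the `loader` test hook are illustrative, not our final design):

```python
import json
from pathlib import Path

# Hypothetical on-disk layout (our assumption):
#   data/hksl/gestures.json  + data/hksl/animations/*.mov
#   data/asl/gestures.json   + data/asl/animations/*.mov

def load_language(lang, root="data", loader=None):
    """Load the trained-gesture set for one sign language.

    `loader` is a test hook: when given, it receives the path and
    returns the data, so the sketch runs without files on disk.
    """
    path = Path(root) / lang / "gestures.json"
    if loader is not None:
        return loader(path)
    with open(path, encoding="utf-8") as f:
        return json.load(f)

print(load_language("hksl", loader=lambda p: p.as_posix()))
# prints: data/hksl/gestures.json
```

Swapping languages then means swapping one data folder, with no change to the recognition or animation code.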

In terms of sustainability, because SignAI is deployed locally, it can serve an unlimited number of people. Even if the number of users keeps growing nationally or globally, we will not occupy the network, because everything is computed locally: the gestures are analysed locally and the text and animation feedback are produced locally. It consumes no shared resources, so it is indefinitely sustainable.

5. Social Impact and Responsibility (Maximum 300 words)

Our primary beneficiaries are, of course, both deaf people and hearing people.

By letting the deaf use their strengths to communicate with us, we connect two separate worlds and make our society more diverse, with new blood and more people to refer to and support each other. It helps us integrate into their society and helps them integrate into ours: every hearing person can enter the deaf community and learn about their situation, and every deaf person can appear, participate, communicate, and improve the hearing community. Our two separated worlds can merge again into one common society in which we all have equal opportunities to communicate, fairly and inclusively, at any place and any time, sharing our different opinions and living together rather than remaining two societies with little contact, as today.

That’s our goal: addressing specific social issues, improving the lives of key beneficiaries, and aligning with broader societal goals such as equity and inclusion.

Do you have additional materials to upload?
Yes
PIC
Personal Information Collection Statement (PICS):
1. The personal data collected in this form will be used for activity organisation, record keeping and reporting only. The collected personal data will be purged within 6 years after the event.
2. Please note that it is obligatory to provide the personal data required.
3. Your personal data collected will be kept by the LTTC and will not be transferred to outside parties.
4. You have the right to request access to and correction of information held by us about you. If you wish to access or correct your personal data, please contact our staff at lttc@eduhk.hk.
5. The University’s Privacy Policy Statement can be accessed at https://www.eduhk.hk/en/privacy-policy.
Agreement
  • I have read and agree to the competition rules and privacy policy.