Technology for frictionless interaction

Virtask is the technology provider for Anne4Care. We build and integrate software systems with a focus on helping elderly people and people with dementia. Our flagship product is Anne, an avatar that can speak and listen and, more importantly, takes care of the end user. We strongly believe that it is not about the technology itself, but about making technology available in a simple and intuitive manner. Together with our partners we have been working on our core technologies for the past ten years; we have rolled them out in various countries across Europe and we are proud to showcase our innovative products.

Our core technology

Our team has been building the core technologies on which our products are based for a decade now. The main ingredients we combine are speech, touch events and a lifelike avatar.

Our avatar technology allows us to portray a lifelike human in hardware-accelerated 3D, male or female, with a whole host of genuine human emotions. The character can have its own mood, so it is possible for our avatar to have a really happy day. An avatar that can speak and listen allows simple and direct communication with our end users: it is an easy way for many people, and especially the elderly, to interact with complex software and hardware.

Our speech recognizer understands a multitude of languages, but more importantly it is able to detect low-level speech changes. We are developing a deep learning solution in which a background system constantly analyzes speech patterns to detect various pathologies and moods.

The client software is composed of our avatar system and modules that can be optionally enabled: news, radio, medication reminders, calendar, games, media, and so on. Each module is both speech and touch enabled, and all are straightforward to use. In the backend we run a host of services, including a dashboard that allows carers to enter specific information for end users: anything from calendar or medication events to changing the avatar's speed of speech or volume. Output from the client is also visualized on the dashboard, giving insight into how the end user is doing.

Our whole system was successfully audited for GDPR compliance by a European entity in 2019.
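As a rough illustration of this modular setup, the sketch below models a per-client configuration the way a carer's dashboard might push it to a device. All names in it (MODULES, ClientConfig, apply_config) are hypothetical; they only mirror the structure described above, not our actual API.

```python
from dataclasses import dataclass, field

# Optional client modules, mirroring the examples named above.
# Everything here is a hypothetical illustration, not the real API.
MODULES = {"news", "radio", "medication", "calendar", "games", "media"}

@dataclass
class ClientConfig:
    """Per-user settings as the dashboard might push them to a device."""
    enabled_modules: set = field(default_factory=set)
    speech_rate: float = 1.0   # 1.0 = the avatar's normal speaking speed
    volume: int = 70           # loudness, 0-100

def apply_config(cfg: ClientConfig) -> None:
    """Enable the requested modules and apply the avatar settings."""
    for module in sorted(cfg.enabled_modules & MODULES):
        print(f"enabling module: {module}")
    print(f"speech rate: {cfg.speech_rate}, volume: {cfg.volume}")

# A carer enables two modules and slows the avatar's speech for one client.
apply_config(ClientConfig({"medication", "calendar"}, speech_rate=0.8))
```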

Speech recognition

Our ASR engine provides accurate and fast speech recognition. It runs offline, which means audio data never leaves the device and we can assure privacy for our users. We currently support 15 languages.
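Our own engine is proprietary, but the offline principle itself can be shown with the open-source Vosk library as a stand-in: the language model lives on the device, and recognition happens without any network access. The model directory and audio file names below are placeholders.

```python
import json
import wave

from vosk import Model, KaldiRecognizer  # open-source offline ASR, as a stand-in

# The language model is a local directory; once downloaded, no network is needed.
model = Model("vosk-model-nl")               # placeholder model path
wav = wave.open("utterance.wav", "rb")       # 16 kHz mono PCM audio

rec = KaldiRecognizer(model, wav.getframerate())
while True:
    data = wav.readframes(4000)
    if not data:
        break
    rec.AcceptWaveform(data)                 # decoding happens on-device

# The audio never left this process; only the transcript comes out.
print(json.loads(rec.FinalResult())["text"])
```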

Speech synthesis

Our speech synthesis is compatible with all major voice publishers, which means we can provide high-quality voices, both male and female, in 30 languages.

Avatar rendering

We have a unique avatar rendering engine that allows us to render a high-quality 3D character on screen which can talk, smile, wink and display a variety of emotions. Lip movement is synchronized with the speech synthesis, which produces natural-looking speech.
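The principle behind the lip sync can be sketched simply: the speech synthesizer reports phoneme timings, and each phoneme maps to a mouth shape (viseme) that the renderer blends between. The mapping and timing format below are a simplified illustration, not our actual rendering engine.

```python
# Minimal sketch: map phoneme timings from a TTS engine to visemes
# (mouth shapes) for the renderer to blend between. Mapping is illustrative.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "smile", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth-lip", "V": "teeth-lip",
}

def viseme_track(phonemes):
    """phonemes: list of (phoneme, start_sec, end_sec) from the synthesizer."""
    track = []
    for ph, start, end in phonemes:
        shape = PHONEME_TO_VISEME.get(ph, "neutral")
        track.append((start, end, shape))
    return track

# "Mama", roughly M-AA-M-AA, each phoneme timed by the synthesizer.
print(viseme_track([("M", 0.00, 0.08), ("AA", 0.08, 0.22),
                    ("M", 0.22, 0.30), ("AA", 0.30, 0.45)]))
```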

Deep learning

Our core engine gathers many kinds of data from both speech (ASR) and the end user's interaction patterns; these are analyzed to detect trends and help carers act faster when something seems wrong.
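As a minimal sketch of the trend-detection idea: take one daily interaction metric, compare its recent average against a longer baseline, and flag a drift. The window sizes and threshold are invented for illustration; the real analysis combines many such signals in a deep learning model.

```python
from statistics import mean, stdev

def drifted(daily_values, baseline_days=21, recent_days=7, z_limit=2.0):
    """Flag a metric whose recent average drifts away from its baseline."""
    baseline = daily_values[:-recent_days][-baseline_days:]
    recent = daily_values[-recent_days:]
    if len(baseline) < 2:
        return False                      # not enough history yet
    spread = stdev(baseline) or 1e-9      # avoid division by zero
    z = (mean(recent) - mean(baseline)) / spread
    return abs(z) > z_limit

# e.g. daily game scores sliding down over the last week
scores = [82, 80, 84, 81, 83, 80, 82, 79, 81, 83, 80, 82, 81, 80,
          78, 74, 72, 70, 69, 66, 64]
print(drifted(scores))  # True: the recent week sits well below the baseline
```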

FAQ

Are you GDPR compliant?
We are fully GDPR compliant and were audited in 2019 with positive results.

Which platforms does your solution run on?
Our solution is currently 90% portable (it works on Android, Linux and Windows), but the core technology still runs on Windows; we are in the process of making that last part portable too. Clients receive a touch-based tablet on which they run the Avatar software.

What do you measure, and what happens with that data?
We measure interaction with the user, provided they have given permission to do so: touch interaction (pressing buttons, swiping, etc.), the timing of various events, and voice analysis. We analyze not only the actual words people say but also the syntax and semantics of the sentences they form and, at a deeper level, the voice itself: tone, timbre, muffled speech and so on. We are collaborating with Radboud University to extract even more information from the low-level voice data.

All these data streams contain markers that can indicate a decline in well-being, and combined marker analysis by our deep learning framework gives detailed insight into the well-being of the client. As an example: if the scores of games played go down over a few weeks, the client asks the Avatar for the time eight times per day versus once four weeks earlier, and we detect more slurred speech, each of these triggers a marker. The strength of a marker, in combination with the other markers, indicates possible cognitive decline, anxiety, or changes in stress level.

We strive for two goals: (1) telling the informal carer that we have detected a change in the client's well-being, and extracting and visualizing detailed information so the carer gains more insight into the situation; and (2) having the Avatar automatically adapt to the new situation, meaning she may speak more slowly, start using different words or sentences, display more or less emotion, and even ask direct questions such as 'You seem somewhat distracted today, can I help you with something?'. The interaction in turn changes what we measure. This tight feedback loop is where our IP lies: we pride ourselves on perfecting this technology so it can achieve a better understanding of our clients and, in turn, help informal (and formal) carers provide higher-quality help because they have a second opinion from our technology framework.
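To make the marker idea concrete, here is a deliberately simplified sketch of how several weak signals could be combined into a single concern score. The marker names, weights, and threshold below are invented for illustration; the production system uses a learned model rather than fixed weights.

```python
# Deliberately simplified: each marker is normalized to a value between
# 0 (no concern) and 1 (strong signal). Names and numbers are illustrative.
markers = {
    "game_score_decline": 0.6,   # game scores trending down over weeks
    "time_question_rate": 0.8,   # asks the time 8x/day vs 1x a month ago
    "slurred_speech":     0.4,   # from low-level voice analysis
}

# Fixed weights for the sketch; in production these are learned, not fixed.
weights = {
    "game_score_decline": 0.3,
    "time_question_rate": 0.3,
    "slurred_speech":     0.4,
}

concern = sum(markers[m] * weights[m] for m in markers)

if concern > 0.5:  # illustrative threshold
    # Goal 1: surface the change to the (informal) carer via the dashboard.
    print(f"notify carer: combined concern score {concern:.2f}")
    # Goal 2: adapt the Avatar, closing the feedback loop.
    print("avatar: slow down speech and ask a direct question")
```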

Some of our partners

Working together and sharing ideas with strong partners keeps us sharp

Microsoft
ABB
Engie
Central point
iHomelab
MedicineMen
Health Valley
Radboud University
Saxion University of Applied Sciences
ZonMW
Op Oost
Oost NL
Provincie Gelderland
Deventer
Zutphen
AAL
EU
Terz Stiftung
CWZ
Alifa
Sint Maarten Zorggroep
iMean
De Parabool
Bonacasa
ZoZijn
Lang zult U wonen