Technology for frictionless interaction

Virtask is the technology provider behind Anne4Care. We build and integrate software systems focused on helping elderly people and people with dementia. Our flagship product is Anne, an avatar that can speak and listen and, more importantly, takes care of the end user. We strongly believe that it is not about the technology itself, but about making technology available in a simple and intuitive manner. Together with our partners we have spent the past ten years working on our core technologies; we have rolled them out in various countries across Europe and are proud to showcase our innovative products.


Our core technology

Our team has been building the core technologies on which our products are based for a decade. The main ingredients we combine are speech, touch events and a lifelike avatar.

Our avatar technology allows us to portray a lifelike human in accelerated 3D, male or female, with a whole host of true human emotions. The character can have its own mood, so it is possible for our avatar to have a really happy day. An avatar that can speak and listen allows simple and direct communication with our end users: for many people, and especially the elderly, it is an easy way to interact with complex software and hardware.

Our speech recognizer understands a multitude of languages, but more importantly it can detect low-level changes in speech. We are developing a deep learning solution in which a background system constantly analyzes speech patterns to detect various pathologies and moods.

The client software consists of our avatar system plus modules that can be optionally enabled: news, radio, medication reminders, calendar, games, media, and more. Each module is both speech and touch enabled, and all are straightforward to use.

In the backend we run a host of services. A dashboard allows caregivers to enter specific information for end users, anything from calendar or medication events to changing the avatar's speech rate or volume. Output from the client is also visualized on the dashboard, giving insight into how the end user is doing. Our whole system was successfully audited for GDPR compliance by a European entity in 2019.
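The module setup described above can be sketched as a small event dispatcher: each optional module handles both speech and touch events through one common interface, and the client routes events only to modules that are enabled. All names here are illustrative assumptions, not Virtask's actual API.

```python
from typing import Dict

class Module:
    """One optional client module (e.g. radio, calendar, reminders).

    Hypothetical sketch: every module exposes the same two handlers,
    so the client can treat speech and touch input uniformly."""

    def __init__(self, name: str):
        self.name = name
        self.enabled = False

    def on_speech(self, utterance: str) -> str:
        # A real module would act on the utterance; here we just echo it.
        return f"{self.name}: heard '{utterance}'"

    def on_touch(self, target: str) -> str:
        return f"{self.name}: touched '{target}'"

class AvatarClient:
    """Routes speech and touch events to whichever modules are enabled."""

    def __init__(self):
        self.modules: Dict[str, Module] = {}

    def add_module(self, module: Module, enabled: bool = True) -> None:
        module.enabled = enabled
        self.modules[module.name] = module

    def handle_speech(self, module_name: str, utterance: str) -> str:
        mod = self.modules.get(module_name)
        if mod is None or not mod.enabled:
            return "module unavailable"
        return mod.on_speech(utterance)
```

A caregiver dashboard toggling a module on or off would, in this sketch, simply flip the `enabled` flag, leaving the routing logic untouched.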

Speech recognition

Our ASR engine provides accurate, fast speech recognition. It runs offline, which means data never leaves the device and we can assure our users' privacy. We currently support 15 languages.

Speech synthesis

Our speech synthesis is compatible with all major voice publishers, which means we can provide high-quality voices, both male and female, in 30 languages.

Avatar rendering

We have a unique avatar rendering engine that renders a high-quality 3D character on screen that can talk, smile, wink and show a variety of emotions. Lip movement is synced to the speech synthesis, which produces natural-looking speech.
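Lip sync of this kind is commonly driven by phoneme-to-viseme mapping: the synthesizer reports which phoneme is spoken at which time, and the renderer picks the matching mouth shape (viseme) each frame. The mapping and timings below are a generic illustration of that technique, not Virtask's engine data.

```python
# Generic phoneme-to-viseme table (ARPAbet-style phoneme labels).
# Several phonemes share one mouth shape, e.g. M/B/P all close the lips.
PHONEME_TO_VISEME = {
    "AA": "open", "IY": "smile", "UW": "round",
    "M": "closed", "B": "closed", "P": "closed",
    "F": "teeth_on_lip", "V": "teeth_on_lip",
}

def viseme_at(timeline, t):
    """Return the viseme to render at time t (seconds).

    timeline: list of (start_sec, end_sec, phoneme) tuples as reported
    by the synthesizer; outside any interval the mouth is at rest."""
    for start, end, phoneme in timeline:
        if start <= t < end:
            return PHONEME_TO_VISEME.get(phoneme, "neutral")
    return "rest"
```

Sampling this function at the renderer's frame rate keeps the mouth shape aligned with the audio, which is what makes the speech look natural.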

Deep learning

Our core engine gathers data from both speech (ASR) and the end user's interaction patterns, which is analyzed to detect trends and help caregivers act faster if something seems wrong.
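One simple way to turn such gathered data into a caregiver alert is baseline deviation detection: compare each day's value (say, number of interactions) against a rolling window of recent days and flag large outliers. This is a minimal statistical sketch of the idea, not Virtask's deep learning model.

```python
from statistics import mean, stdev

def flag_anomalies(daily_values, window=7, threshold=2.0):
    """Return indices of days whose value deviates more than `threshold`
    standard deviations from the rolling mean of the previous `window` days.

    daily_values: list of numbers, e.g. daily interaction counts."""
    flags = []
    for i in range(window, len(daily_values)):
        history = daily_values[i - window:i]
        mu, sigma = mean(history), stdev(history)
        # Only flag when the baseline has some variance to compare against.
        if sigma > 0 and abs(daily_values[i] - mu) > threshold * sigma:
            flags.append(i)
    return flags
```

A dashboard could surface the flagged days to caregivers, who then decide whether the change warrants a visit or a call.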

Some of our partners

Working together and sharing ideas with strong partners keeps us sharp.

Microsoft
ABB
Engie
Central point
iHomelab
MedicineMen
Health Valley
Radboud University
Saxion University of applied science
ZonMW
Op Oost
Oost NL
Provincie Gelderland
Deventer
Zutphen
AAL
EU
Terz Stiftung
CWZ
Alifa
Sint Maarten Zorggroep
iMean
De parabool
Bonacasa
ZoZijn
Lang zult U wonen