How we built in-app chat for our patients

And what we learnt as an engineering team
One of our guiding philosophies at Eucalyptus is to help our patients achieve positive change — to live and maintain a healthier life. As a health-tech company, “digitising” profound lifestyle change has been one of our biggest challenges. During the first iteration of this digitisation, we weren’t seeing the uptake, nor the impact, we had hoped for. Understanding that human-to-human interaction is vital to this change, we realised it was actually our job to digitise that, and prepare the system for scale. This has become one of the core concepts behind our Patient Adherence Loop (PAL).

While the behind the scenes of how we worked to optimise patient journeys isn’t so glamorous (hint, it involved a bajillion meetings) we decided to start at the beginning with a patient and practitioner chat experience. It was clear we needed a simple tool to enable one-to-one communication between our health coaching team and our patients.

In-app chat would ensure health coaching fed into all future patient interactions.

The road to an MVP

We wanted to build something quickly. We had a team of health coaches ready to go, a mobile app that had recently launched and a product team who were keen to learn more about our patients.

Right from the outset, the team had three main concerns:

  1. Building a great chat experience is really hard. Patient expectations are high, with many of them being used to experiences such as iMessage, WhatsApp and Messenger — all robust chat platforms, with hundreds of engineers working to refine them.
  2. If we were going to rival these, there is a lot to build. Moderation, reporting, socket infrastructure… the list goes on. And that’s before we consider how it integrates on our end.
  3. How are we going to achieve the above when we are a small team?

A good engineer can design a robust chat architecture, but the best engineers can hunt for these constraints early, and simplify their solution to match.

Our Staff Engineer, Geoff Cowan, drove a four-week foundational build of a chat experience that we’re still raving about. Why? Because it met the brief: “A simple tool to enable 1:1 comms between our health coaching team and patients.”

Getting into the details

We went with a third-party, white-label chat product, called Stream, and built a lightweight micro-service around it. The responsibilities between the two were clear:

  • Stream drives messaging, replies, reactions, avatars — all the high-quality features you expect from a messaging experience
  • Our service drives the creation of threads and assigns users (and health coaches) to those threads whenever a patient joins our program
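To make that split concrete, here’s a minimal sketch of the one responsibility our service owns: deciding which thread should exist and who belongs to it. This is an illustration, not our production code — the function and field names are invented, and a real implementation would hand this descriptor to the Stream SDK.

```typescript
// Sketch of the thread-creation responsibility our service owns.
// Stream handles the messaging itself; we only decide which thread
// exists and who its members are. All identifiers are illustrative.

interface ChatThread {
  type: "messaging"; // Stream channel type
  id: string;        // deterministic per patient, so re-joins are idempotent
  members: string[]; // the patient plus their assigned health coach
}

// Called whenever a patient joins a program.
function buildChatThread(patientId: string, coachId: string): ChatThread {
  return {
    type: "messaging",
    id: `patient-${patientId}`,
    members: [patientId, coachId],
  };
}
```

Keeping the thread ID deterministic per patient means the same thread is reused if the join event ever fires twice.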

Out of potential third parties, we chose Stream for a few reasons, mainly around the ease of integration and the amount of functionality that came out of the box. Being a healthcare company, we have set GDPR & data privacy requirements we need to meet, and Stream just kept ticking our boxes.

I can almost feel you grimace at the thought of integrating with a third party, but, given the chance, we would do it again 1000 times over. Again, we’re a healthcare company, not a social media company. As a business, Stream is focused on providing a great white-label chat experience, whereas chat is just one of the many capabilities we provide our patients. Turns out, the wheel works pretty well. Sometimes it makes sense to build, and sometimes to buy. We always consider both.

It took us one day to get a prototype into the app, and over the following week we polished it.
And we did the same for our health coaches too.

Okay, MVP has launched… now what?

Feedback time! Working at Euc is pretty neat, in that we have many types of users. Our patients are the obvious ones, but our internal team of clinical practitioners — in in-app chat’s case, health coaches — use the tools we build to help patients, so their feedback is key.

We found a few main themes from practitioner feedback:

  1. Firstly, they loved it
  2. Their workflows when using chat were… less than ideal.

    They had to flick back and forth between our existing admin portal to get information about the patient because it was only surfaced there. To solve this, we took our doctor platform, a once simple tool used for prescribing, and extended it to support multiple practitioner types. It now houses nurse practitioners, doctors, and health coaching workflows, each being presented with all the relevant information about the patient.

    They were struggling to manage large patient loads. You think a group chat pops off? Imagine 1000 patients per coach. We rethought how we prioritised patients, and built a simple queuing algorithm for them, which ordered patients based on a few factors.
  3. Patients who were deemed high priority were not being prioritised over our more “happy path” patients. We extended the queue with an “Outreach” view, meaning we could prioritise proactive support while tending to reactive replies from the rest of our patients.
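The queue and Outreach ideas above can be sketched in a few lines. This is a toy illustration under assumptions — the factors and weights here are invented; our real algorithm used a different (and larger) set of signals.

```typescript
// Illustrative sketch of a coach-side patient queue: score each patient
// on a few factors, and let a high-priority "outreach" flag dominate so
// proactive support ranks above routine reactive replies.

interface PatientState {
  id: string;
  hoursSinceLastReply: number; // reactive signal: how stale the conversation is
  outreach: boolean;           // proactive signal: flagged as high priority
}

function queueScore(p: PatientState): number {
  // Invented weights: outreach always outranks staleness alone.
  return (p.outreach ? 1000 : 0) + p.hoursSinceLastReply;
}

function orderQueue(patients: PatientState[]): PatientState[] {
  // Highest score first; copy so the input list is untouched.
  return [...patients].sort((a, b) => queueScore(b) - queueScore(a));
}
```

Even a simple deterministic ordering like this beats a coach manually scanning 1000 conversations for who needs attention next.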

This was great — especially from an engineering perspective — because each of these improvements shipped separately in the weeks following that MVP launch, which kept the work manageable and the feedback loop tight.

Often startups in 0-1 mode can spend all their time building big feature after big feature. But the truth is, our MVPs will never be perfect, and it’s inevitable V2, 3, and 4 will come. It’s important to carve out time for this in your future roadmap.

Engineers always preach “Make your code extendable”, because there’s no better feeling when that day comes - and you can ship improvements or extensions just as easily as you’d hoped. We’re big fans of this at Euc.


Small iterations are good, but knowing when to pull the trigger on bigger bets is equally important. Once chat had been in the wild for three months, some of the earlier pain points were exacerbated by scale, and we needed better solutions.

  1. We gather a lot of information about our patients, but it takes a lot of time to interpret it and answer the ultimate question: “Where is this patient at?”
  2. Any service provided by humans has the issue of time being finite. We can’t be proactively reaching out to every patient, every week. So we needed better ways to tag and prioritise them.

Introducing… trends! A pretty cool piece of tech, that provides an overview of patient behaviours and themes. We make sure to address any concerning trends which emerge, such as plateaus, and prioritise certain trends in the queue. If you’d like to learn more about our Trends engine, check out this article which goes deep into the build. Plus, videos!
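As a flavour of what a single trend check might look like, here’s a hypothetical plateau detector. This is not our Trends engine — the window, threshold, and function name are all assumptions for illustration; the real build is covered in the article linked above.

```typescript
// Hypothetical sketch of one trend signal: detecting a plateau in a
// patient's weekly measurements (e.g. weight). A real engine would
// combine many signals; this only shows the shape of the idea.

function isPlateau(
  weeklyValues: number[],
  window = 3,      // how many weeks of change to examine
  threshold = 0.5, // total change below this counts as "flat"
): boolean {
  if (weeklyValues.length < window + 1) return false; // not enough history
  const recent = weeklyValues.slice(-(window + 1));
  const change = Math.abs(recent[recent.length - 1] - recent[0]);
  return change < threshold; // barely moved across the window
}
```

A flag like this is exactly the kind of signal that can feed back into the coach queue, bumping a plateauing patient up for proactive outreach.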

What did we learn?

I know I personally learned a bunch from this project. Some of it I’ve called out above, but to add a few more:

  1. Much of my career was spent in growth engineering, where I was always iterating. It’s super important to both swing big and iterate — and to have the instinct to know when the time is right for each.
  2. Feedback from our patients is gold, and we have a responsibility to do something with it.
  3. Third parties aren’t scary. Sure, integrations can be hard sometimes (we recommend vetting their API docs before you even speak to them), but it’s important to remember what problems your company is solving and focus your engineering team on those.
  4. It took some long, tense discussions to get this right. These conversations can be uncomfortable, but in the end, it pays off because we’re all super proud of what we built.


Ryan Turnbull
Engineering Team Lead