Friends of Dentro

Updates, learnings & insights from our work in AI. Edition 4.


Welcome to the fourth edition of Friends of Dentro! 🎉

We set up this newsletter to keep you in the loop on what we’re up to at Dentro: our work with clients, our own products and everything along the way. A mix of learnings, thoughts and things we’re working on, for anyone interested in AI, digital products and Dentro in general.

In this edition we’ll talk about, among other topics, how our AI employee Paula processes our call recordings and our experience of hiring our first additional team member at Dentro. We’ll also recap the past few weeks and share insights on what else we’ve been up to recently.

Thanks for reading along!

Btw, if you have questions about any of the topics mentioned or want to learn more, just reply to this email and we’ll be happy to assist.

Paula acting as a Slack team member
(and getting a new surname)

In case you’re somewhat following AI news, you might have heard of AI agents - no worries if not! - which are one of the hottest topics right now. “Agents” in this context means all kinds of systems that are able to perform actions more or less autonomously. You can think of an agent as an employee who does certain tasks for you. The possibilities are virtually endless.

In this newsletter edition, we want to share one practical example we built for ourselves. It’s less an agent than an automated workflow, but we think it’s simple enough to illustrate the core concept.

Setting the scene: we record all our internal calls with the intention of creating meeting notes we can refer to later. Nothing very complex, but a tedious task nonetheless if you do it manually.

Perfect to have our AI employee Paula handle it from now on.

An overview of what the agent flow looks like.

Our flow works like this:

  • She continuously watches certain Google Drive folders.

  • Once we upload a new MP3 recording, the process is triggered automatically and Paula starts working.

  • First, she sends the MP3 to AssemblyAI for transcription. We tested a bunch of different tools, but AssemblyAI seems to work best for German. The result is a long text file with everything we talked about.

  • After that, she stores this text file in a predefined Google Drive folder in case we need to come back to it in the future.

  • Then she passes the whole text to an LLM (OpenAI’s GPT-4.5 Preview in this case), which creates the meeting notes for the call, complete with all talking points and any resulting tasks we should take care of.

  • These notes are then sent to our Slack channel as a message from Paula.

The whole process takes just a couple of minutes, and we don’t have to do anything other than upload the recording file to Google Drive.
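
For the technically curious: here’s a minimal sketch of what such a flow can look like in Python. The Google Drive trigger is assumed to be handled by the automation platform, and the API keys and channel name are placeholders; the AssemblyAI, OpenAI and Slack calls are the standard ones from their Python SDKs.

```python
# Minimal sketch of Paula's call-notes flow. Assumes the Google Drive
# trigger hands us the path of a freshly uploaded recording; keys and
# the channel name below are placeholders.
import assemblyai as aai
from openai import OpenAI
from slack_sdk import WebClient

aai.settings.api_key = "ASSEMBLYAI_API_KEY"   # placeholder
openai_client = OpenAI()                      # reads OPENAI_API_KEY from env
slack = WebClient(token="SLACK_BOT_TOKEN")    # placeholder

def process_recording(mp3_path: str) -> None:
    # 1. Transcribe the recording (German audio in our case).
    config = aai.TranscriptionConfig(language_code="de")
    transcript = aai.Transcriber().transcribe(mp3_path, config)

    # 2. Persist the raw transcript so we can come back to it later.
    with open(mp3_path.replace(".mp3", ".txt"), "w") as f:
        f.write(transcript.text)

    # 3. Turn the transcript into structured meeting notes via an LLM.
    response = openai_client.chat.completions.create(
        model="gpt-4.5-preview",
        messages=[
            {"role": "system", "content": "Create meeting notes from this "
             "call transcript: all talking points plus resulting tasks."},
            {"role": "user", "content": transcript.text},
        ],
    )
    notes = response.choices[0].message.content

    # 4. Post the notes to Slack as a message from Paula.
    slack.chat_postMessage(channel="#calls", text=notes)
```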

In one of our calls we briefly discussed a potential new surname for Paula. She proactively picked this up in her call notes and responded to it! And this is how Paula got her new surname ‘Poderosa’ 😉

This is just one of many examples of how AI can help companies automate tedious manual tasks. Of course it can get much more complex for other use cases, and we’ll keep sharing more examples in the months to come.

Our experience with hiring our first team member

Dentro is growing 🚀 So it was time to get some extra help and hire our first team member: a full-stack engineer, primarily to support us in building our own software products and creating beautiful, well-functioning user interfaces.

As this was our first hire at Dentro, we had to figure things out along the way. And it went better than expected. Here’s a rough recap:

  • Day 0: Created a job ad. We tried to include everything that’s important to us and clearly state what’s a hard requirement and what’s optional. And sprinkle in a bit of humour, of course.

  • Day 1: We still had an unused 3-day free trial for a premium job listing on LinkedIn, so we published the job there. We looked for other suitable platforms as well, but quickly gave up as there doesn’t seem to be anything comparable in terms of price and reach.

  • Day 3: Deactivated the job again, as we had already collected a total of 101 applications within 48 hours. That should be enough.

  • Day 3 - 5: As this was quite a bit more than we’d expected, we quickly drew up a system that allowed us to review and rank all CVs in an organized way. We both went through all CVs independently and rated them ‘Yes’, ‘Maybe’ or ‘No’. Everyone we’d both rated ‘Yes’ or ‘Maybe’ passed to the next round, a total of 28. We then discussed all 28 in depth and eventually narrowed it down to 6 applicants we invited for an interview.

  • Day 6 - 13: Did interviews with all 6 applicants, each around 1.5 hours long, covering a few of the usual questions as well as a coding interview. We intentionally wanted to keep things quick and easy for both us and the applicants, so the plan was to have only this one interview with each and then hopefully be able to make a decision. The results were a very diverse mix, with some applicants leaving a really good impression and others honestly feeling like scammers 😅

  • Day 14: To keep things reasonably objective, each applicant was rated from 1 to 10 in the categories ‘Knowledge’, ‘Ownership and Responsibility’, ‘Communication’, ‘Team Fit’ and ‘Experience’ by both of us right after the interview. And voilà, we had a winner! 🎉

Alexandra started with us this month. She is a self-taught full-stack developer with a knack for building well-functioning web applications. Apart from knowing her way around her tech stack, she impressed us with very high agency and drive - the self-taught route is never an easy one - and will be a great fit for our team. We’re excited to see the impact she will make on our work at Dentro!

PS: While going through the CVs, Paul noted down a few of his thoughts in the form of a thread on X. So if you’d like some unfiltered insights, you’ll find them here: https://x.com/paul_dentro/status/1901312301577154704

Tweet of the month:
Shopify makes AI use mandatory

Did you know that the CEO of Shopify, the dominant software player in the ecommerce space with a market cap north of 122bn EUR, is German? Tobi Lütke emigrated to Canada many years ago and subsequently founded Shopify there.

But that’s just a side note, so let’s move on to the actual topic: Tobi recently published an internal memo with his thoughts on AI’s implications for his business. And it’s well worth a read!

Its title already sets the tone: “Reflexive AI usage is now a baseline expectation at Shopify”.

Wow!

Why?

Because he kind of turns the narrative around. Many companies (including a lot of those we talk to) ask the question: “Why should I use AI?”. But maybe the question should be “Why don’t we use AI?”. And this question should be omnipresent, asked again and again for practically any task and process, so it becomes “Why don’t we use AI for that?”.

Many times the result will be that AI can’t help with a certain process (yet). But try it nevertheless, and from time to time you will stumble upon really good AI use cases that help you gradually automate your business operations. Here’s the link to the whole memo: https://x.com/tobi/status/1909251946235437514


Building activities 🛠

  • Further improved AI phone assistant for a client. We are using Vapi for this, with custom post-processing of the call data. In addition to answering questions, it can now access the caller’s current time and assist in more depth with certain inquiries.

    We noticed that quite a few callers mistake the AI for a simple mailbox and just throw single words at it (“real human”, “customer service”, “invoice” etc.) – in those cases, our assistant now stresses that they can talk to it like they would to a human.

    All in all definitely a step up in terms of user experience.
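
    As a side note: the single-word handling is conceptually simple. Below is a simplified illustration of the idea in Python. It is deliberately independent of Vapi’s actual API, and the keyword list is just an example.

```python
# Simplified illustration (not Vapi's actual API): spot mailbox-style
# single-word utterances so the assistant can point out that callers
# may talk to it like to a human. The keyword list is an example.
MAILBOX_KEYWORDS = {"real human", "customer service", "invoice"}

def clarification_for(utterance: str) -> str | None:
    """Return a clarifying reply for keyword-style utterances,
    or None so the normal assistant logic handles the turn."""
    cleaned = utterance.strip().lower()
    if cleaned in MAILBOX_KEYWORDS or len(cleaned.split()) == 1:
        return ("You can talk to me like you would to a human - just "
                "describe in a full sentence what you need.")
    return None
```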


  • Built a Power Automate flow to process first-level support emails. As a client of ours is using Microsoft, we wanted to give Power Automate a go. We built a flow that processes every email arriving in a shared inbox: it sends a reply, determines the correct department, emails that department in case they need to act manually, and then archives the email in a predefined folder.

    As usual with Microsoft products, the buggy interface was the biggest challenge in the end, but everything now works and is used in practice.
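
    For illustration, here is the same logic expressed in Python instead of Power Automate. The helper functions (fetch_unread, send_mail, classify_department, archive) are hypothetical stand-ins for the flow’s connector steps.

```python
# The support-email flow as plain Python. fetch_unread, send_mail,
# classify_department and archive are hypothetical stand-ins for the
# Power Automate connector steps.
DEPARTMENTS = {"billing": "billing@example.com",
               "technical": "tech@example.com"}

def triage_inbox() -> None:
    for mail in fetch_unread("shared-support-inbox"):
        # 1. Acknowledge receipt to the sender.
        send_mail(to=mail.sender, subject="Re: " + mail.subject,
                  body="Thanks for reaching out - we're on it.")
        # 2. Determine the responsible department (rules or an LLM).
        department = classify_department(mail.subject, mail.body)
        # 3. Notify the department only if manual action is needed.
        if department in DEPARTMENTS:
            send_mail(to=DEPARTMENTS[department],
                      subject="[Action needed] " + mail.subject,
                      body=mail.body)
        # 4. Archive the processed email in a predefined folder.
        archive(mail, folder="processed")
```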

  • Machine learning model finally producing good results. In another client project, we are working with millions of lines of data to build a prediction model from scratch. After two months we’ve finally reached a state that can be used in practice (and we’ve set up a custom API along the way). This is the first of a variety of models we’ll build on the same data set.
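
    To give an idea of what “usable in practice” means here: the model runs behind a small custom API that client systems can call. A minimal sketch of such an endpoint follows, with a placeholder model file and made-up feature names.

```python
# Minimal sketch of serving a trained model behind a custom API.
# The model file and feature names are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # trained offline beforehand

class Features(BaseModel):
    feature_a: float
    feature_b: float

@app.post("/predict")
def predict(features: Features) -> dict:
    value = model.predict([[features.feature_a, features.feature_b]])[0]
    return {"prediction": float(value)}
```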

  • Automated web scraping and embedding for our RAG pipelines. We’ve been using Graphlit for this so far, but now we have the whole flow in a custom automation. It regularly checks all URLs of a domain, scrapes the content, embeds it and stores it in a vector store (Pinecone in this case). This helps our apps stay up to date and answer based on current information.
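
    The core loop is short enough to sketch. discover_urls and scrape_text are hypothetical helpers; the embedding and vector store calls are the standard ones from the OpenAI and Pinecone Python SDKs.

```python
# Condensed sketch of the scrape-embed-store loop. discover_urls and
# scrape_text are hypothetical helpers; the API key is a placeholder.
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="PINECONE_API_KEY").Index("rag-content")

def refresh_domain(domain: str) -> None:
    for url in discover_urls(domain):   # e.g. via the sitemap
        text = scrape_text(url)         # fetch and clean the page
        embedding = openai_client.embeddings.create(
            model="text-embedding-3-small", input=text
        ).data[0].embedding
        # Upsert keyed by URL so re-runs overwrite stale content.
        index.upsert(vectors=[(url, embedding, {"text": text})])
```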

  • Finished hardware benchmarks for a local AI setup. For a client with very high data security needs, we benchmarked a full AI setup (LLMs, embedding models, transcription options, hardware, etc.), with the goal of finding out what they’ll need to build on-premise in order to use AI, as running this in the cloud is not an option for them. After weeks of work, all benchmark tests are done and we know what we’re going to run with. In case you want to know more, have a look at this YouTube video in which we explain things in detail.

    Next step: on this tech stack, build a NotebookLM clone that runs in the cloud, uses the company’s data and can then be replicated in an on-premise setup.
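
    To make “benchmark” a bit more concrete: one of the simpler measurements is generation throughput. Here’s a rough sketch, assuming the local model is served behind an OpenAI-compatible endpoint (as e.g. vLLM or llama.cpp provide); the URL and model name are placeholders.

```python
# Rough throughput measurement against a locally hosted LLM that is
# served via an OpenAI-compatible endpoint. URL/model are placeholders.
import time
import requests

def tokens_per_second(prompt: str) -> float:
    start = time.perf_counter()
    response = requests.post(
        "http://localhost:8000/v1/completions",
        json={"model": "local-model", "prompt": prompt, "max_tokens": 256},
        timeout=300,
    ).json()
    elapsed = time.perf_counter() - start
    return response["usage"]["completion_tokens"] / elapsed
```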

  • Offer creation for construction companies taking a turn. For more than a year we’ve been toying with the idea of building an automated offer calculation tool for construction companies.

    The process in short: input offer positions → check relevant prices → calculate each position → output final offer. We got this up and running and working quite well – it turned out construction companies are somewhat reluctant when it comes to innovation though 😅

    So we have to make the entry barrier as low as possible. In a recent conversation with a new lead we discovered that there are public pricing tables online that are actively used by companies in this industry. This changes things for us, as it would allow us to build a product that can be used out of the box instead of one that requires a lengthy individual setup.

    Next steps: making sure the prices we calculate are sufficiently accurate, then automating the whole process from start to finish.
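
    To make the pipeline concrete, here’s a simplified sketch of the calculation step. lookup_unit_price is a hypothetical stand-in for the public pricing tables mentioned above.

```python
# Simplified sketch of the offer calculation: positions in, priced
# offer out. lookup_unit_price is a hypothetical pricing lookup.
from dataclasses import dataclass

@dataclass
class Position:
    description: str   # e.g. "pour concrete foundation"
    quantity: float
    unit: str          # e.g. "m2" or "hours"

def calculate_offer(positions: list[Position]) -> list[dict]:
    offer = []
    for pos in positions:
        unit_price = lookup_unit_price(pos.description, pos.unit)
        offer.append({
            "position": pos.description,
            "quantity": pos.quantity,
            "unit_price": unit_price,
            "total": round(pos.quantity * unit_price, 2),
        })
    return offer
```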

Misc 👨‍💻

  • Struggled to find a European cloud GPU provider. As we work with large datasets at times, local model training often won’t do the trick, so we rent cloud servers for that. Unfortunately, European providers are very rare 😕 For now we clean the data first and process it on US servers, but we will soon try Scaleway as a European alternative.

  • Did a data residency review of all our software. Most tools we use are from US providers, which subsequently process and store data in the US. Due to recent developments, we wanted to ensure all our data stays within the EU. Easier said than done: we pay additional fees for some tools and had to rule out others completely. We have a good core setup by now, but will need to keep meticulously reviewing privacy policies for anything we want to use in the future.

    A summary of our rather underwhelming options.


  • Switched to Slack for internal communications. This was one of the results of the data residency review. We pay a little more, but keep all our data within the EU.

  • Decided on a new tech stack for our custom projects. React on the frontend, FastAPI on the backend, and Postgres as the database. For this we’re exploring the excellent full-stack template from FastAPI, which includes all of these technologies and more. Alexandra is currently exploring its details and whether it’s something we can build upon. The goal remains to have one tech stack we build everything on, in order to stay as efficient as possible when developing new applications.

  • Had our infrastructure assessed by an outside expert. As mentioned in our last newsletter, we wanted to make sure all our infrastructure is on par with reasonable security standards. To be on the safe side, we contracted an outside expert to evaluate our setup. Most of it was already in good shape, but we still got a few improvement suggestions, so it was totally worth it.


  • Organic traffic finally picking up a bit. We write blog posts and create SEO-focused industry pages on our main domain in order to get some additional traffic to our website. We’ve roughly tripled our impressions and clicks this year and hope to keep growing in the months to come.

    Organic impressions gradually growing this year.


And this wraps up the fourth edition of Friends of Dentro. Thanks so much for being a part of it and reading along as Dentro evolves!

Questions, feedback, or anything else you’d like to tell us? We’re happy to read it all
– just hit ‘Reply’ on this email.

All the best,
Paul & Paul from Dentro

PS: You can share this newsletter with anyone who might be interested, just send them this link: https://friendsofdentro.beehiiv.com/subscribe