For a remote team like ours here at PressW, meetings are a crucial part of the everyday workflow. Having dedicated times where we can communicate directly with our coworkers and our clients is paramount to keeping our projects on track.
However, meetings can be a bit of a double-edged sword when you're a small team. We have to be careful about spending too much time in meetings - we need time to complete our everyday work, after all!
When we analyzed how we spend our time, we found that our team members were meeting with coworkers, clients, and prospective leads for an average of more than 15 hours every week!
As AI practitioners, we naturally began drafting up a smart solution to mitigate that issue: an AI system to automatically create custom meeting summaries! When a member of our team wants to stay in the loop on a project but won't be contributing to the conversation in a meeting, reviewing a meeting summary instead can save a ton of time. We've also fine-tuned our custom solution to highlight action items and important project updates to maximize how useful it is for our team.
The final product has two major pieces that make it tick:
An automatic transcriber to listen in on our meetings and write down everything we discuss
An AI pipeline that reads through the full transcription and creates a summary that meets our full specifications
Let's do a quick dive into how those work and how you can quickly make your own summarizing bot!
The Automatic Transcriber
Transcribers have been around for a while, so we elected to use an already existing option in Fireflies.ai. Fireflies is great because you can add their bot to a meeting ahead of time and it'll pop into the meeting before it starts, quietly save a video and transcript of the meeting, and then upload all of that information where your team can access it anytime. Crucially, it also keeps track of who was speaking for each part of the conversation, which makes reading through the transcript a lot easier for both ourselves and our AI. Fireflies also creates its own meeting summaries if you'd like to try those out yourself!

See the notetaker, "Fireflies.ai Notetaker", in the top right? That's our transcriber!
AI Summarizing Pipeline
While Fireflies already produces high quality meeting summaries, we wanted to take them one step further and tailor our summaries to be specifically designed for our workflows. After a few rounds of brainstorming and tinkering, we created an application that utilized LLMs (large language models, which power products like ChatGPT) to meet all of our needs. After our summarizing pipeline finishes processing our meeting's transcription, it spits out three different resources for our team:
A full list of action items

Our team wanted to be able to quickly reference our meeting summary for action items that we could turn into tickets for our sprints. Our action item compilation pipeline splits up the meeting transcript into smaller pieces so that the individual pieces are fully digestible by LLMs. Then, an LLM produces a list of action items for each section of the meeting, and the pipeline combines them all into one comprehensive list that our developers can easily read through at any time.
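If you want to try this chunk-and-combine approach yourself, here's a minimal sketch of what it might look like in Python. This isn't our production code - the `complete` parameter is a placeholder for whichever LLM API call you use, and the chunk size and prompt wording are just illustrative:

```python
from typing import Callable, List

def chunk_transcript(transcript: str, max_chars: int = 8000) -> List[str]:
    """Split a transcript into pieces small enough for an LLM to digest.

    Splitting on speaker turns (blank lines here) keeps each chunk coherent
    instead of cutting someone off mid-sentence.
    """
    chunks, current = [], ""
    for turn in transcript.split("\n\n"):
        if current and len(current) + len(turn) > max_chars:
            chunks.append(current)
            current = turn
        else:
            current = f"{current}\n\n{turn}" if current else turn
    if current:
        chunks.append(current)
    return chunks

def extract_action_items(transcript: str, complete: Callable[[str], str]) -> List[str]:
    """Run the extraction prompt over each chunk, then merge the results."""
    items: List[str] = []
    for chunk in chunk_transcript(transcript):
        response = complete(
            "List every action item in this meeting excerpt, one per line:\n\n"
            + chunk
        )
        items.extend(
            line.strip("- ").strip()
            for line in response.splitlines()
            if line.strip()
        )
    return items
```

A real pipeline would also deduplicate overlapping items from adjacent chunks, but the shape of the loop is the same.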
Questions from the meeting

If a client or team member had a question that we weren't able to answer in the meeting, we wanted to be able to quickly address that without having to look back at the full transcript for context. Just like with action items, the pipeline splits up each section of the meeting into processable chunks for the LLM. We instruct the LLM to review those sections for any questions that didn't get a strong answer in the moment, and then save those questions along with any pertinent details to a list of questions for our team to take a look at.
⭐ Pro Tip!
If you try this yourself and find that the LLM isn't doing a great job of remembering key details, we recommend trying out the following:
Use a "smarter" model like GPT-4o if you're using a base model. While it does come at a greater cost, these models are considered to be smarter for a reason - they're better at following your instructions and remembering key pieces of context when you put them to work. If you find that only using a smarter model isn't sustainable for all of your meetings, consider using a baseline model like GPT-4o-mini to review each chunk of the meeting for unanswered questions first, and then bring in your smarter model to extract the key details when it matters.
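That two-stage routing - cheap model as a filter, expensive model only where it counts - can be sketched like this. The `cheap_llm` and `smart_llm` callables are hypothetical stand-ins for your actual model calls, and the YES/NO prompt is just one simple way to frame the filter:

```python
from typing import Callable, List

def find_unanswered_questions(
    chunks: List[str],
    cheap_llm: Callable[[str], str],
    smart_llm: Callable[[str], str],
) -> List[str]:
    """Cheap model flags chunks that contain unanswered questions;
    the smarter (pricier) model only runs on the flagged chunks."""
    flagged = [
        chunk for chunk in chunks
        if cheap_llm(
            "Does this excerpt contain a question that was not clearly "
            "answered? Reply YES or NO.\n\n" + chunk
        ).strip().upper().startswith("YES")
    ]
    return [
        smart_llm(
            "Extract each unanswered question and the context needed to "
            "answer it later:\n\n" + chunk
        )
        for chunk in flagged
    ]
```

If most chunks of a meeting contain no open questions, the expensive model ends up reading only a small fraction of the transcript.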
Have your LLM iterate over the material multiple times. LLMs, just like their human counterparts, can create a much better product by taking their "first draft" and improving on it. After your LLM first outputs the questions it finds, try sending it those questions back along with the meeting transcript again, with the request that it pull out more context about those questions. This time, it'll do a much better job of finding those extra crucial details you need about those questions. If you're seeing an improvement but you're still not satisfied with the results, you can repeat this process again until it's found everything you need in the transcript. You can also artificially recreate this process in a single LLM call using "chain-of-density" prompts.
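The refinement loop described above might look something like this. Again, `complete` is a placeholder for your LLM call, and the prompt wording and number of rounds are illustrative rather than a fixed recipe:

```python
from typing import Callable

def refine_questions(
    transcript: str,
    first_pass_questions: str,
    complete: Callable[[str], str],
    rounds: int = 2,
) -> str:
    """Send the extracted questions back with the transcript, asking the
    model to enrich each one with the context it missed the first time."""
    notes = first_pass_questions
    for _ in range(rounds):
        notes = complete(
            "Here are questions pulled from a meeting, followed by the full "
            "transcript. Rewrite the list, adding any context from the "
            "transcript that would help answer each question.\n\n"
            f"QUESTIONS:\n{notes}\n\nTRANSCRIPT:\n{transcript}"
        )
    return notes
```

In practice you'd stop iterating once a round stops adding new details, rather than running a fixed number of passes.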
Why does this work?
Why doesn't the LLM just get it right the first time? One unique quirk that you may not know about LLMs is that they only read through your request once before they respond! If we looked under the hood of your favorite LLM, we'd find that its architecture is built around the concept of "attention" - as an LLM reads through the request you send it, it internally saves the parts of the request it should pay "attention" to when it responds. The biggest difference between a "dumber" and a "smarter" model is how much information it can pay attention to before it starts to lose track of details when it responds.
The first time the LLM reads through your meeting transcript, it'll mostly just be focused on the task of finding questions, and it won't have as much bandwidth to keep track of all the context surrounding them. However, when you return with your already extracted list of questions, it'll be able to use those questions as reference for what parts of the meeting it should focus on, and locate those details in a much more effective way.
A detailed summary of the meeting
Finally, our pipeline creates a fully fleshed-out meeting summary that our team can quickly review for client sentiment and other key details. Since full meeting transcripts are too long to reasonably pass into an LLM for a detailed summary, we have the LLM iterate over smaller chunks of the meeting transcript individually to create detailed mini-summaries. Once those are small enough to collectively pass to an LLM, we ship them off to be combined into one full, detailed summary that can be reviewed in less than a minute.
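This "summarize the chunks, then summarize the summaries" pattern is often called map-reduce summarization, and a bare-bones version fits in a few lines. As before, `complete` is a hypothetical stand-in for your LLM call, and a real pipeline would split on speaker turns rather than raw character offsets:

```python
from typing import Callable

def summarize_meeting(
    transcript: str,
    complete: Callable[[str], str],
    max_chars: int = 8000,
) -> str:
    """Map: summarize each chunk individually.
    Reduce: combine the mini-summaries into one final summary."""
    chunks = [
        transcript[i:i + max_chars]
        for i in range(0, len(transcript), max_chars)
    ]
    minis = [
        complete("Summarize this meeting excerpt in detail:\n\n" + chunk)
        for chunk in chunks
    ]
    return complete(
        "Combine these partial summaries into one coherent, detailed "
        "meeting summary:\n\n" + "\n\n".join(minis)
    )
```

For very long meetings where even the mini-summaries overflow the context window, the reduce step can be applied recursively.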
The Results (ROI Highlights)
Now, our meeting summary AI fully enables our team members to stay in the loop on projects that they aren't actively leading. For some of our team, that's over half of their meetings! Thanks to the meeting summary AI, each of our team members saves at least 5 hours every week. We also rearranged our meeting schedules at the same time, and now our developers are able to schedule all of their meetings in the morning, giving them the full afternoon every day to focus on their dev work without any interruptions.
We were also able to fully stand up our meeting summary AI in less than two weeks, making it one of our fastest projects ever! Because we developed it in-house, we've made it fully configurable to work for us. Now, it uses Zapier to automatically detect when a meeting has ended, pulls all of the meeting information from Fireflies, and then outputs the summarized information into Notion where we keep all of our team resources.
If you're interested in your own team's AI solutions (big or small!), don't be afraid to reach out to us at PressW! We're a custom LLM shop that prides itself on being at the forefront of what you can do with new AI technology. If you're curious about some of our other, larger-scale projects, check out our other case studies here - or, schedule a meeting with our CEO here!
