AI Strategy

Redesigning the boring middle

Four AI patterns we ship for mid-market services and B2B firms (5m+ revenue, under 100 staff), from a public Appify talk at The Fold.

For: Operators at mid-market services and B2B firms (5m+ revenue, under 100 staff)

Appify Intelligence Team · 27 April 2026 · 8 minute read

In this post, a mid-market services or B2B firm means a business past 5m in revenue and under 100 staff. Almost no widely circulated AI commentary is written for that band. The enterprise stories come from the MIT NANDA report, where 95% of generative AI initiatives showed no measurable profit-and-loss impact six months past pilot. The startup stories come from founders shipping weekend prototypes that nobody is running as a real production service a month later. Neither describes the band where you have enough revenue to invest seriously but not enough headcount to absorb the failed experiments those stories describe.

On 21 April 2026 at The Fold, the members' club on Spencer Street in London, our CEO Karl Mulligan ran through four AI deployments shipping inside Appify and one of our products. Each one was sized for that mid-market band. The talk landing page is live at fold-appify.netlify.app. The four patterns below are the playbook from that talk.

The boring middle, defined

The deck called this band of work the "boring middle." It covers the operations that grew with the business but never got an owner. Marketing operators producing weekly content by hand. Ops managers pulling reports out of five different tools every Monday. Customer success leads calling back inbound enquiries from a spreadsheet. Sales teams chasing open RFP feeds for ten hours before they find a worthwhile lead.

Three properties give the boring middle away. It is bottlenecked by headcount rather than ambition. The people defending it say "this is how we've always done it" more often than they argue the ROI. And it stays invisible until somebody asks where you would put two more hires tomorrow. That answer is the boring middle. For firms in this band, AI earns its keep when you redesign one of those workflows so that the next two hires you would have made do not need to exist.

The same wedge shows up in McKinsey's State of AI 2025, which finds that 39% of AI-using firms attribute any EBIT (earnings before interest and tax) impact to AI, and only 6% qualify as high performers. McKinsey's own line on what separates the 6% is that the gap is "overwhelmingly organisational, not technological." For mid-market firms that is good news. You do not need a thousand-person organisation to build the discipline. You need to pick the right four or five workflows.

The four patterns below are the ones that keep working at this scale.

1. Content engines that ship every day

Cadence, not quality, is the hardest content problem at this scale. One marketing manager and an intern cannot ship daily output, whatever the quality bar; they can ship a weekly post that gets buried in a feed. A content engine fixes the cadence problem by treating content as a deployable system rather than a deliverable.

Inside Appify the engine runs a six-stage loop: ingest a feed list, cluster the day's stories, draft a script, voice it with text-to-speech, render the video, publish. The same infrastructure runs our two consumer news products, irelandnews.ie and novinesrbija.rs. It also produces the Appify AI Business Brief, a 60-second daily video summarising the top three technology stories. The marginal cost per episode is near zero, and headline to finished video runs in minutes.
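To make the shape concrete, here is a minimal sketch of that loop as a composable pipeline. The stage names mirror the six steps above; the bodies are placeholders standing in for the production steps, not the engine's actual code.

```python
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Stage:
    name: str
    run: Callable[[Any], Any]

def run_pipeline(stages: list[Stage], payload: Any) -> Any:
    # Each stage's output becomes the next stage's input.
    for stage in stages:
        payload = stage.run(payload)
    return payload

# Placeholder stages mirroring the six-step loop in the post.
episode_pipeline = [
    Stage("ingest",  lambda feeds: [f"story from {url}" for url in feeds]),
    Stage("cluster", lambda stories: stories[:3]),                         # keep the top three stories
    Stage("draft",   lambda top: " ".join(top)),                           # draft the 60-second script
    Stage("voice",   lambda script: {"script": script, "audio": b"..."}),  # text-to-speech
    Stage("render",  lambda assets: {**assets, "video": b"..."}),          # assemble the video
    Stage("publish", lambda episode: episode),                             # human review gates this step
]

if __name__ == "__main__":
    print(run_pipeline(episode_pipeline, ["https://example.com/feed"]))
```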

The same shape would carry to other verticals. A law firm could ship a weekly case-law roundup that would otherwise take a clerk 200 pages of reading. A finance team could voice a daily portfolio-company brief filtered by sector. Internal comms could ship a Monday "what shipped last week" video drafted from ticket data. The AI does not need to write well on its own. What changes the headcount math is that a human reviewer at the publishing step is reviewing rather than drafting from scratch.

2. Ask the business

Silos are the dashboard problem at this scale. Visualisation has been a solved problem for a decade. HR data lives in BambooHR or Workday. Project management lives in Linear or Jira. The customer relationship management (CRM) tool is its own world. Financial data sits in Xero or Sage. Asking a question that crosses two of those, like "what is our capacity for new client work next quarter, given the leave requests already filed?", means a human pulling reports from each silo and reconciling them on a spreadsheet.

The deck's second case is what Karl called "Ask the Business." A natural-language layer sits across the existing silos, queries them, and returns a single grounded answer with a chart and a row-level breakdown. The talk demonstrated four asks live: this quarter's KPIs (key performance indicators), current leave requests, resource allocation plotted next quarter, and upcoming public holidays. Each came back as a structured answer with citations to the underlying records instead of a hallucinated paragraph.

Three principles make this kind of dashboard usable at mid-market scale:

  • Every answer shows its working, with row-level sources.
  • A human is one click from the underlying data.
  • The scoreboard the team measures it against is profit. The system gets removed if it does not earn its time.
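A minimal sketch of the answer shape those principles imply, with illustrative field names rather than the schema of any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class SourceRow:
    system: str      # e.g. "BambooHR", "Jira", "Xero"
    record_id: str   # the row the number came from
    url: str         # one click back to the underlying record

@dataclass
class GroundedAnswer:
    question: str
    answer: str                  # the natural-language summary
    chart: dict                  # a chart spec the front end can render
    sources: list[SourceRow] = field(default_factory=list)

answer = GroundedAnswer(
    question="What is our capacity for new client work next quarter?",
    answer="Roughly 2.5 FTE of unallocated capacity after approved leave requests.",
    chart={"type": "bar", "x": "team", "y": "free_days"},
    sources=[SourceRow("BambooHR", "leave-1042", "https://example.com/leave/1042")],
)
```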

We covered the same trade-off in our note on what actually moves AI unit economics: a system that costs more than the analyst it replaces is a hobby with a budget code.

3. Voice past the chatbot ceiling

For a services firm with a senior expert at the centre (an executive coach, a clinical lead, a senior consultant), the bottleneck is the expert's calendar. They can take five appointments a day. They have built a body of intellectual property over a career that could engage 500 people if there were a way to deliver it.

Genio is a self-coaching mobile application we shipped with founder Stephen Gribben on top of his executive coaching framework. In 2026 we added a voice agent layer. The voice runs on the OpenAI Realtime API, which went generally available (GA) in August 2025 at $32 per million audio input tokens and $64 per million audio output tokens, roughly six to twenty-four cents per minute of conversation depending on talk ratio. For a session that replaces a piece of expert time, the unit economics work.
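As a back-of-envelope check on that claim, here is the arithmetic using the per-minute range above; the session length and the expert's hourly rate are assumptions for illustration.

```python
# All three inputs are illustrative assumptions.
session_minutes = 20
cost_per_minute_low, cost_per_minute_high = 0.06, 0.24   # the per-minute range quoted above
expert_hourly_rate = 300.0                               # assumed rate for the expert's time

voice_cost = (session_minutes * cost_per_minute_low, session_minutes * cost_per_minute_high)
expert_cost = expert_hourly_rate * session_minutes / 60

print(f"Voice session: ${voice_cost[0]:.2f}-${voice_cost[1]:.2f}")   # $1.20-$4.80
print(f"Same minutes of expert time: ${expert_cost:.2f}")            # $100.00
```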

A chatbot answers everything you throw at it. A voice agent worth shipping turns most of those moments back into questions. Genio's guardrails, defined inside the OpenAI Agents Software Development Kit (SDK), keep responses to two to five sentences with one question per turn and an instruction to always paraphrase what the user said back. A sample turn from the talk: the user said "I had a rough week with the team," and the agent replied, "Rough how, the kind that drains you, or the kind that makes you second-guess yourself?" The slide caption that holds the whole guardrail together is "two sentences, one question, always paraphrase." The voice does not need to be smarter than the user. It needs to behave like a coach who has read the methodology rather than a search engine that wants to help.
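A minimal sketch of how those constraints might be expressed as agent instructions, assuming the Python openai-agents package; the wording is illustrative rather than Genio's production prompt, and the real deployment adds the realtime voice layer on top.

```python
from agents import Agent  # pip install openai-agents

coach = Agent(
    name="voice-coach",
    instructions=(
        "You are a coaching companion grounded in the supplied methodology. "
        "Keep every reply to two to five sentences. "
        "Ask exactly one question per turn. "
        "Always paraphrase what the user just said before you ask it."
    ),
)
```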

4. Inbound triage in 30 seconds

The cheapest competitive advantage at mid-market scale is calling people back fast. The classic Harvard Business Review piece on online sales leads by Oldroyd, McElheran, and Elkington audited 2,241 firms and found that those who responded within an hour were seven times more likely to qualify a lead than those who waited an hour longer, and 60 times more likely than those who waited 24 hours or longer. The average response time the study measured was 42 hours.

The lead-qual pattern from the deck closes that gap. A web form takes a name, number, and brief. An AI voice agent calls the prospect back inside 30 seconds, asks a short qualifying script (six data points in the deployment Karl demonstrated), attaches a transcript to the lead record, scores it, and routes the call to the right human based on skills and load. The human picks up a warm record with the work already done.
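A sketch of the record a salesperson might pick up, plus a naive routing rule; the field names, score threshold, and skills match are illustrative assumptions, not the deployed logic.

```python
from dataclasses import dataclass

@dataclass
class QualifiedLead:
    name: str
    phone: str
    brief: str
    transcript: str           # full text of the callback conversation
    answers: dict[str, str]   # the qualifying data points the agent collected
    score: float              # 0-1: how much a human conversation is warranted

def route(lead: QualifiedLead, reps: dict[str, dict]) -> str | None:
    """Send high-scoring leads to the least-loaded rep with a matching skill."""
    if lead.score < 0.6:      # low scores never reach the queue
        return None
    eligible = {
        name: rep for name, rep in reps.items()
        if lead.answers.get("service") in rep["skills"]
    }
    if not eligible:
        return None
    return min(eligible, key=lambda name: eligible[name]["open_leads"])
```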

The deck's framing on this slide was the line we keep returning to: "AI qualifies. A human closes." For a mid-market firm with one or two outbound salespeople doing the work of three, the gain rarely shows up as a conversion-rate spike. What changes is where the salespeople spend their hours: on the 20% of leads that warrant a human conversation, while the other 80% never reach the queue.

What it costs in 2026

The economics moved in the past 12 months and they keep moving. Voice was historically the most expensive interaction modality to operate at scale. In 2026 it is production-cheap. The OpenAI Realtime API price cut in August 2025 brought a typical conversation in at well under a dollar. The news engine runs at near-zero marginal cost per episode because the human review is the only step that does not scale with output. The dashboard layer's running cost is dominated by the model calls per question, which sit in single-digit cents at typical use.
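A back-of-envelope sketch of that per-question figure; every rate and token count below is an assumed, illustrative number to swap for whichever model you actually run.

```python
# All four numbers are illustrative assumptions, not quoted prices.
PRICE_PER_M_INPUT = 2.50     # USD per million input tokens
PRICE_PER_M_OUTPUT = 10.00   # USD per million output tokens
context_tokens = 6_000       # retrieved rows and schema sent with the question
answer_tokens = 500          # grounded answer plus chart spec coming back

cost = (context_tokens * PRICE_PER_M_INPUT + answer_tokens * PRICE_PER_M_OUTPUT) / 1_000_000
print(f"~${cost:.3f} per question")   # ~$0.020 at these assumptions
```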

Two cautions are worth naming. Tooling churn is the new constant: the deck for the talk was built in Anthropic's Claude tooling released the Friday before the event. Any "we use X tool" decision will be revisited in 90 days. And model pricing is climbing for the frontier tier even as it falls for the steady-state tier. Build the workflow so the model is a swappable component rather than a hard dependency.
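One way to keep the model swappable is to put a thin interface between the workflow and the provider. A minimal sketch, with illustrative class and method names and the OpenAI client as one example adapter:

```python
from typing import Protocol

class TextModel(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIChatModel:
    """One adapter; swapping providers later means writing another adapter, not a rewrite."""
    def __init__(self, client, model: str = "gpt-4o-mini"):   # model name is illustrative
        self._client, self._model = client, model

    def complete(self, prompt: str) -> str:
        resp = self._client.chat.completions.create(
            model=self._model,
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

def draft_monday_brief(model: TextModel, ticket_notes: str) -> str:
    # The workflow depends only on TextModel, never on a specific vendor SDK.
    return model.complete(f"Summarise what shipped last week:\n{ticket_notes}")
```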

What still goes wrong

Three failure modes show up at mid-market scale.

No internal AI ambassador

Pulling lead engineers off billable work for 1.5 days a week is cheaper than running an agency-led pilot, but only if there is somebody on staff who reads the release notes and brings new tooling back to the team. Without that role the firm runs the same toolchain for 18 months and watches the wedge close.

The "Wild West" pattern

Every employee uses a different AI subscription and the company has no shared guardrails. The fix is not to mandate one tool. It is to publish a recommended approach per use case, with an open path for any team member to propose an alternative when they have a better one.

IP ownership

Operators in this band should not accept a default where the supplier owns the intellectual property. We default to client ownership of any code we ship for engagements at this scale, and any partner you engage should be able to do the same.

We covered the closely-related question of why most enterprise AI pilots stall in an earlier post on why 95% don't move the P&L. The shape of the failure is the same at mid-market scale, and so is the fix.

How to start

The first workflow to pick is the one where the next two hires you would otherwise make never get hired. That is the boring middle's own definition turned into a selection rule: if the workflow does not free up headcount you were planning to spend, it is not the right one to redesign first.

Run the experiment in four to six weeks. Keep a human at every decision point. Measure it against a profit number rather than a productivity feeling. Kill or reshape it at the two-quarter mark if it has not paid back.

If any of the four patterns above looks like a problem you have, the first conversation is free. We work with mid-market firms across Ireland, the UK, and the EU on exactly this kind of redesign. Read more on our AI consulting page, or send us a note and we will book a call.

Tagged: ai-strategy, ai-automation, mid-market, voice-agents
