RapDev
RapDev Innovation & Technology Culture
RapDev Employee Perspectives
What is the unique story that you feel your company has with AI? If you were writing about it, what would the title of your blog be?
Advancements in AI are becoming larger and more frequent by the day. There’s no shortage of bold claims about what it can do, and while the conversation often swings between hype and hesitation, the real story lies somewhere in between. We at RapDev are happy to position ourselves right in the middle. As ServiceNow Elite and Datadog Premier partners, we have the best seats in the house for watching how businesses try to adapt. The enthusiasm is real. Leaders want to leverage AI, improve efficiency and future-proof their operations. But the challenge lies in balancing that ambition with the rigid systems they have relied on for years — sometimes decades.
It’s not that businesses are resisting change. It’s that change is difficult when day-to-day operations require predictability, stability and compliance. If we were to write a blog, I’d want to title it “Mind the Gap: Pushing Dreams to Prod” since we’re positioned to turn big dreams into impactful outcomes.
What was a monumental moment for your team when it comes to your work with AI?
The shift from “what can we automate?” to “what kind of decision-making can we replicate?” was a massive one. The immediate draw of AI is to automate processes, but processes could already be automated. That wasn’t new. We had to step back and break down how we approach problems as engineers so we could onboard a large language model to sit in the seat next to us and trust it to do the job just as well.
Suddenly, we’re not measuring success based on code reaching the end of its flow without erroring out. We’re measuring how well an agent can understand intent, adapt to variation and make a decision that has you responding with a nod, “I was thinking the same thing.”
What challenges did your team overcome in AI adoption?
Our customers’ challenges are also our challenges. When we encounter highly manual or rigid processes, they’re not just inefficiencies to work around or patch. They’re shared pain points that we feel too, because they’re often too complex or nuanced for static scripting to handle effectively.
Sighing and saying, “there’s got to be a better way to do this,” is a tale as old as time — but AI has changed the rules. Challenges that once felt insurmountable, the kind that would eat as much code as you could throw at them, suddenly become solvable when you can develop solutions that adapt to the messiness of the real world. Where we used to see barriers, we now see possibilities.

What’s your rule for fast, safe releases — and what KPI proves it works?
My rule is: make every release small, reversible and observable. If a change can’t be traced, tested and rolled back in minutes, it doesn’t ship. We lean on trunk-based development, CI test gates, automated change creation and approvals via ServiceNow DevOps, and low-friction ChatOps approvals; then we let Datadog telemetry decide whether we continue, canary or roll back. The KPIs are the DORA set (deployment frequency, lead time, change failure rate, mean time to recovery) plus the percentage of changes auto-approved with a complete audit trail. When the system is healthy, velocity rises without trading off safety.
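The DORA metrics above are straightforward to compute once each deployment is tagged with its outcome. Here is a minimal sketch in Python, assuming a hypothetical `Deployment` record; the field names and window are illustrative, not RapDev's actual schema:

```python
from dataclasses import dataclass
from datetime import datetime
from statistics import mean
from typing import Optional

@dataclass
class Deployment:
    shipped_at: datetime
    caused_incident: bool
    recovered_at: Optional[datetime] = None  # set once the incident is resolved

def dora_summary(deploys: list[Deployment], window_days: int = 30) -> dict:
    """Summarize DORA-style KPIs over a window of deployments."""
    n = len(deploys)
    failures = [d for d in deploys if d.caused_incident]
    recoveries = [d for d in failures if d.recovered_at is not None]
    return {
        "deployment_frequency_per_day": n / window_days,
        "change_failure_rate": len(failures) / n if n else 0.0,
        # Mean time to recovery, in minutes, over incidents that were resolved.
        "mttr_minutes": mean(
            (d.recovered_at - d.shipped_at).total_seconds() / 60
            for d in recoveries
        ) if recoveries else 0.0,
    }
```

In a real pipeline these records would come from the change and incident tooling rather than being constructed by hand.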
What standard or metric defines “quality” in your stack?
Quality in my stack is defined by “change confidence”: the degree to which we can predict how a change will behave in production before it ever ships. Practically, that’s a blend of correctness, operability and maintainability enforced by the toolchain. The metric I anchor on is escaped defects per release, normalized by change size, paired with SLO impact (error budget burn attributable to deployments). If we’re building in quality, both trend down even as deployment frequency rises. We drive it with explicit standards: test coverage, contract tests for integrations, static analysis and security scanning as non-negotiable gates, and progressive delivery (canaries and feature flags) with automated rollback tied to Datadog monitors. A release is “high quality” when it’s boring: it passes its gates, is observable by default and doesn’t move SLOs in the wrong direction.
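To illustrate the normalization described above, here is a minimal sketch of escaped-defect density per 1,000 changed lines paired with an error-budget check. The function names and default thresholds are illustrative assumptions, not real SLO values:

```python
def escaped_defect_density(escaped_defects: int, lines_changed: int) -> float:
    """Escaped defects per 1,000 changed lines: normalizes defects by change size."""
    if lines_changed <= 0:
        raise ValueError("lines_changed must be positive")
    return escaped_defects * 1000 / lines_changed

def release_is_boring(defect_density: float, budget_burn_pct: float,
                      max_density: float = 0.5, max_burn_pct: float = 1.0) -> bool:
    """A release is 'boring' when both quality signals stay under their thresholds.

    The thresholds here are made-up defaults for illustration only.
    """
    return defect_density <= max_density and budget_burn_pct <= max_burn_pct
```

Normalizing by change size keeps a large refactor with two escaped defects from looking worse than a one-line change with one.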
Name one AI/automation that shipped recently and its impact on your team or the business.
We recently shipped Arlo for Remote Execution with a specific customer: an AI agent paired with a lightweight endpoint agent on end-user PCs. Arlo can now issue a command that the PC agent securely picks up over an outbound-only connection, then automatically runs a guarded runbook: collecting diagnostics, executing repair scripts and confirming the fix with structured results sent back to Arlo. We treated it like production automation from day one — allow-listed, signed scripts; least-privilege execution; rate limits; approval hooks for sensitive actions; and a complete audit trail so every step is reviewable. The result is faster triage and resolution for common endpoint issues, fewer escalations and a better employee support experience. In practice, “time to diagnose” moved from back-and-forth chats to near-instant signals, and a meaningful slice of tickets was resolved end to end without human intervention, freeing support and IT engineers to focus on high-leverage work while improving SLA adherence.
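The guardrails described above (allow-listing, content verification, approval hooks for sensitive actions, and a complete audit trail) can be sketched roughly as follows. This is an illustrative model only, not Arlo's actual implementation; the script names, hashes and data shapes are made up:

```python
import hashlib
import hmac
from datetime import datetime, timezone

# Hypothetical allow-list: script name -> SHA-256 of its approved content.
ALLOWED_SCRIPTS = {
    "collect_diagnostics": hashlib.sha256(b"echo collecting diagnostics").hexdigest(),
    "flush_dns_cache": hashlib.sha256(b"ipconfig /flushdns").hexdigest(),
}
SENSITIVE_ACTIONS = {"flush_dns_cache"}  # these require a human approval hook

audit_log: list[dict] = []  # every request is recorded, whatever the outcome

def run_guarded(name: str, content: bytes, approved: bool = False) -> str:
    """Gate a runbook step: allow-list, verify content, check approval, audit."""
    entry = {"script": name, "ts": datetime.now(timezone.utc).isoformat()}
    if name not in ALLOWED_SCRIPTS:
        entry["result"] = "rejected: not allow-listed"
    elif not hmac.compare_digest(
        hashlib.sha256(content).hexdigest(), ALLOWED_SCRIPTS[name]
    ):
        entry["result"] = "rejected: content hash mismatch"
    elif name in SENSITIVE_ACTIONS and not approved:
        entry["result"] = "held: approval required"
    else:
        # A real agent would execute here under a least-privilege account.
        entry["result"] = "executed"
    audit_log.append(entry)
    return entry["result"]
```

The constant-time hash comparison and the append-only log mirror the pattern in the text: nothing runs unless it is both allow-listed and byte-for-byte the approved version, and every attempt, including rejections, lands in the audit trail.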
