• An Agent OS needs to think about thinking

    The opinions expressed in this article are my own and do not necessarily reflect those of my clients or employer. I use Anthropic’s Claude to do most of my writing.

    In an earlier post, I argued that AI agents are much better at helping us do the thing than doing the thing themselves, and that before they can do much of anything useful, they need somewhere to live. An operating system. Trust, memory, orchestration, all the boring plumbing that makes autonomy actually work.

    That was about moving agents from task execution to something closer to judgment. Navigating real workflows instead of running isolated demos. The reactions were interesting, but not in the way I expected. Nobody really wanted to talk about orchestration. What people kept asking, in various polite and less polite ways, was: OK, but can these systems (help us) think better? Not think faster. Not automate the thinking we’re already doing. Actually think in ways we currently don’t.

    Which, it turns out, is a much harder question. And the answer probably has less to do with AI than with something embarrassing about how human reasoning actually works.

    The decision that won’t get made

    Here’s a situation. You’re on a leadership team. The question on the table is whether to embed generative AI into your core product. The first-mover faction says speed wins. The fast-follower faction points out (correctly, with examples) that plenty of companies moved early and spent two years building on the wrong architecture. The wait-and-see faction wants more data. Everyone has a slide deck.

    Months pass. The decision doesn’t get made. Or (and this is the more common and worse outcome) it gets made for bad reasons. Someone was more persuasive in the last meeting. A board member mentioned what a competitor was doing. People got tired.

    The standard explanation is “analysis paralysis,” which sounds like a diagnosis but is really just a description. The team had too much information and too many options and couldn’t decide. Sure. But why couldn’t they decide?

    Here’s a theory: the team wasn’t stuck because they had too many options. They were stuck because everyone was reasoning about the problem the same way, and nobody could see it. It’s hard to notice that you’re trapped in a single thinking framework while you’re using it, in the same way it’s hard to notice you have an accent. You just sound normal to yourself.

    Analysis paralysis, in a lot of cases, is a reasoning-mode problem. You’re locked into one way of thinking when the situation is screaming for three or four.

    If that’s right, it matters a lot for where Agent OS goes next.

    Five thinking approaches, one brain

    There are, broadly speaking, five reasoning frameworks that show up in most serious decision-making. Most of us are fluent in one, maybe two. Given enough time, we use most of them. We reach for them the way you reach for a familiar coffee mug: not because it’s the best vessel for the job, but because it’s there and it’s yours.

    First Principles Thinking strips a problem to raw materials and rebuilds from scratch. If you listen to any tech podcast, this is the most popular buzzword bingo entry. The famous example is Elon Musk looking at the cost of a rocket, deciding the market price of components was basically a fiction, and asking what the underlying metals and carbon fiber actually cost per pound. That’s how you get SpaceX.

    The less famous application is something like a telco facing margin pressure on its wireless network. The standard move is to optimize: renegotiate spectrum leases, densify towers, squeeze more efficiency out of existing infrastructure. First principles asks a ruder question. Why does the infrastructure need to be terrestrial at all? Low-earth orbit satellites change the economics entirely. You don’t get there by optimizing towers. You get there by asking whether towers are the right unit of analysis, which nobody in the room was doing because everyone in the room grew up building towers.

    Analogical Thinking is pattern-matching across domains. I do this all the time. My teams are probably tired of me saying “we are chefs, tools are ingredients in a pantry…”

    A streaming platform looking at subscriber churn might study mobile gaming engagement loops. The content has nothing in common, but the churn dynamics are structurally the same.

    Where this gets fun is when the analogy is weird. A telco trying to reduce call center volume would naturally look at what other telcos do. But Netflix basically solved a version of this problem by replacing reactive customer support with predictive personalization that fixed the friction before anyone picked up a phone. This “literally” (I know I’m using the word… and I mean it) happened to me back in 2015. I call it “the best interaction I never had,” and I built a proactive service offering out of that experience. Nobody in telecom was looking at a content company for call center strategy, which is exactly why it’s a useful place to look.

    Systems Thinking maps how things connect to each other. The natural lens for a tech company evaluating cloud resilience or a telco doing network capacity planning.

    The non-obvious application: a media company negotiating content licensing deals. This looks like procurement. But each licensing decision quietly reshapes what the recommendation algorithm serves up, which changes viewer behavior, which alters what content gets greenlit next, which changes negotiating leverage for the next round of deals. It’s a feedback loop masquerading as a purchase order. Many media companies negotiate these like they’re buying office supplies.

    Design Thinking starts with “how does this feel to a human being.” When a telco sunsets 3G or migrates customers to a new billing platform, the playbook is all systems engineering: cutover timelines, device compatibility matrices, capacity planning. Design thinking asks one question nobody else in the room is asking: what happens to the 80-year-old customer whose flip phone just stopped working? Suddenly the migration isn’t a technical project. It’s a churn problem, a brand problem, and (depending on how many 80-year-olds you have) potentially a regulatory problem. No Gantt chart is going to catch that.

    Critical Thinking stress-tests everything else. A media company debating a big bet on AI-generated programming might have pitch decks that look fantastic and competitive pressure that feels urgent. Critical thinking is the person in the room (usually not a popular person in that moment) who asks: what has to be true for this to work at scale, and what evidence do we actually have beyond our own pilots? That question doesn’t kill the momentum. It redirects it toward the bets that might survive contact with actual audiences.

    Where it breaks down

    So here’s the problem. When you’re deep in a first-principles decomposition, you don’t naturally pause and think, “Actually, is there just a good analogy for this that would take five minutes instead of five weeks?” When you’re mapping a system’s interdependencies, you can lose hours modeling feedback loops and completely miss that the underlying assumption needs to be questioned, not modeled.

    The framework you know best feels right. It always feels right. That’s what makes it your framework. And the idea of simultaneously running five different reasoning approaches, holding their outputs in your head, noticing where they agree and disagree, and then synthesizing all of that into a decision? Look, there are probably people who can do this. They are not most people. They are not most teams. It’s not a skills problem; it’s a bandwidth problem.

    The issue isn’t picking the right framework. The issue is that the situation needs several frameworks running at once, and we don’t have the cognitive hardware to do that.

    The algorithm for thinking about work, applied to thinking itself

    There’s a useful parallel here. Musk has what he calls “the algorithm”: a five-step process he reportedly repeats to an annoying degree in production meetings at Tesla and SpaceX. The steps: question every requirement. Delete any part or process you can. Simplify and optimize what’s left. Accelerate the cycle time. And only then, automate.

    The order is the point. Most organizations jump straight to step five. They try to automate a process that nobody has bothered to question, simplify, or delete. Musk’s own admission: “The big mistake in Nevada and at Fremont was that I began by trying to automate every step. We should have waited until all the requirements had been questioned, parts and processes deleted, and the bugs were shaken out.”

    Now apply that same sequence not to a production line, but to how organizations reason through decisions:

    Question every requirement. Which thinking frameworks are actually being used? Who decided this is a systems problem, and why? Is the team defaulting to financial modeling because the CFO is in the room, or because financial modeling is actually the right lens?

    Delete. Which lines of reasoning are producing noise rather than signal? If the analogical pass keeps generating comparisons to companies in unrelated industries with different unit economics, maybe that pass needs to be dropped for this particular problem.

    Simplify. Of the frameworks that are producing useful output, where are they agreeing? Agreement across frameworks is a strong signal. That’s where the team should focus instead of trying to reconcile every divergence.

    Accelerate. How quickly can the team cycle through multiple reasoning modes on the same question? Right now, this takes weeks of meetings. What if it took hours?

    Automate. And only now, after the reasoning process itself has been questioned, trimmed, and simplified, does it make sense to ask whether agents can take on parts of it.

    This is where the interesting question lands: can agents actually do these things?

    Not in the abstract. Concretely. Can an agent question whether a team is stuck in a single reasoning mode? Can it delete an unproductive line of analysis? Can it simplify by identifying where three different reasoning approaches are converging on the same conclusion? Can it accelerate the cycle from weeks to hours by running multiple reasoning passes in parallel?

    The answer, today, is a qualified yes, if the system is designed correctly. Current LLMs can be constrained to reason within specific frameworks. They can be prompted to decompose a problem to first principles, or to search for analogies, or to map system dynamics. No single pass is as good as a world-class expert in that mode. But the ability to run five of them simultaneously, and then compare outputs? That’s something no human team does well, because of the cognitive limits described above. It’s a capability the technology has that we don’t.
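    Mechanically, “running five passes simultaneously” is not exotic. Here is a minimal Python sketch of the idea; every name in it is illustrative, and `run_pass` is a hypothetical stand-in for a real LLM API call whose system prompt constrains the model to one reasoning framework:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    FRAMEWORKS = [
        "first-principles decomposition",
        "analogical pattern-matching",
        "systems mapping",
        "design thinking",
        "critical stress-testing",
    ]

    def run_pass(framework: str, problem: str) -> dict:
        # Hypothetical stand-in: in practice this would call an LLM with a
        # system prompt that constrains it to reason within one framework.
        prompt = f"Analyze the problem strictly via {framework}:\n{problem}"
        return {
            "framework": framework,
            "prompt": prompt,
            "output": f"[{framework} analysis of: {problem}]",
        }

    def run_all_passes(problem: str) -> list[dict]:
        # The passes are independent of each other, so they can run
        # concurrently -- this is where "weeks of meetings" compresses
        # into hours.
        with ThreadPoolExecutor(max_workers=len(FRAMEWORKS)) as pool:
            return list(pool.map(lambda fw: run_pass(fw, problem), FRAMEWORKS))
    ```

    The parallelism is the easy part; everything interesting happens in what you do with the five outputs afterward.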

    The harder problem (and this is where the real work is) is the layer that sits above those passes. The thing that evaluates whether a given line of reasoning is actually useful or just sounds rigorous. The thing that decides when to switch. The thing that synthesizes. That’s not a prompt. That’s an architecture.

    Thinking about thinking

    There’s a term for that architecture: meta-reasoning. It simply means deciding how to think, not just thinking. Which sounds like a philosophy seminar, but it’s actually pretty practical.

    You’ve done it. Everyone has. You’re an hour into analyzing something and you stop and think, “Wait, this is circular. What if instead of decomposing the problem, the better move is just to find someone who solved something similar?” That pause, the moment where you step outside the reasoning and evaluate the reasoning, that’s meta-reasoning.

    The problem is that we do it inconsistently, usually by accident, and almost never when the stakes are high. Under pressure, in a room full of people, with a deadline, the last thing anyone does is step back and say, “Hey, maybe we’re all thinking about this wrong.” It’s the cognitive equivalent of asking for directions. Everyone knows they should, and almost nobody does.

    Now, someone is going to read this and say: “That’s just a good facilitator. A skilled chief of staff or strategy lead does exactly this, reads the room, notices when the team is stuck in one mode, and redirects.” And that’s fair. Some people are genuinely excellent at this. The problem is that it’s rare, it’s expensive, and it doesn’t scale. That one brilliant facilitator can be in one room at a time. They have their own biases about which frameworks to reach for. They get tired. They have political constraints on what they can say. And no organization has enough of them to cover every decision that matters.

    The question is whether systems could do this more consistently. Not replacing that facilitator, but being available in the rooms where there isn’t one.

    What this actually looks like

    Let’s make it concrete. Go back to the AI-in-product decision from the opening. The leadership team is stuck. Here’s what a meta-reasoning layer does with that problem.

    A first-principles pass decomposes it: what does it actually cost to build this capability, what’s the minimum viable version, and what assumptions about build-vs-buy are being treated as given that shouldn’t be? An analogical pass looks for structural parallels: which companies in adjacent industries embedded AI into their core product early, and what patterns emerge from the ones that succeeded versus failed? A systems pass maps second-order effects: how does this decision interact with the product roadmap, the engineering hiring plan, and the partnerships in flight? A design pass asks what the customer actually experiences: does this make the product better in a way they’d notice and pay for? And a critical pass stress-tests the lot: what has to be true for any of this to work, and where is the evidence thinnest?

    The interesting part isn’t any individual pass. It’s the controller that reads the outputs, notices where they contradict each other (and those contradictions are often the most valuable signal), and synthesizes them into something the team can actually use.

    That’s the meta-reasoning controller. Less a brain, more an orchestra conductor who doesn’t play an instrument but knows when the strings section has been going on too long.

    The honest caveat: the hard problem isn’t generating the individual passes. It’s evaluating whether the output is actually good. A first-principles decomposition that sounds rigorous but rests on a flawed assumption is worse than no decomposition at all. The controller has to catch that, and right now, that’s still more art than science. But “more art than science” is also how most strategic decision-making works today. The bar isn’t perfection. It’s whether the system produces better inputs than the team would generate on its own.

    The evolution of Agent OS

    This is where Agent OS comes back in. The arc of these systems has a direction: Execution → Judgment → Decisioning. First-generation agents did tasks. Second-generation agents are learning to navigate judgment calls inside workflows. Third-generation agents will help orchestrate how people reason through hard problems in the first place.

    That third generation requires an architecture that looks something like this: framework-specific agents (or more accurately, framework-constrained reasoning passes) running in parallel, a meta-reasoning controller that monitors, evaluates, and synthesizes their outputs, and a reinforcement loop that learns over time which combinations of reasoning work for which kinds of problems. Not because someone wrote the rules, but because the patterns emerged from use.
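    The synthesis step of such a controller can be sketched in a few lines, assuming each pass emits a set of tagged conclusions. The interface and the near-unanimity threshold below are my own illustrative choices, not an established Agent OS API:

    ```python
    from collections import Counter

    def synthesize(passes: dict[str, set[str]]) -> dict:
        """Compare conclusions across framework-constrained reasoning passes.

        Agreement across frameworks is treated as a strong signal; a
        conclusion only one pass reaches is flagged for scrutiny rather
        than discarded, since contradictions often carry the most value.
        """
        counts = Counter(c for conclusions in passes.values() for c in conclusions)
        n = len(passes)
        return {
            # near-unanimous: where the team should focus
            "focus_here":     [c for c, k in counts.items() if k >= n - 1],
            # partial agreement: worth an explicit debate
            "debate_these":   [c for c, k in counts.items() if 1 < k < n - 1],
            # a single dissenting pass: question the assumption behind it
            "question_these": [c for c, k in counts.items() if k == 1],
        }
    ```

    A real controller would also have to judge whether each pass’s output is any good before counting it, which, as noted above, is the genuinely hard part.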

    Agent OS needs to evolve from orchestrating work to orchestrating thinking. And unlike the facilitator who can be in one room at a time, this layer can sit behind every consequential decision in the organization. Consistently, without fatigue, and without the political constraints that keep smart people from saying “I think we’re approaching this wrong.”

    The quiet part

    If this happens (and most of the underlying components already exist, so “if” is doing less work than it might seem), it probably won’t be dramatic. Meta-reasoning won’t show up as a product launch. It’ll show up as a feature that offers a different perspective when a team has been stuck on the same slide for forty minutes. Or a nudge: “This has been treated as a systems problem for the last three meetings. What would a first-principles view look like?”

    Over time, those nudges accumulate. The organizations that build this into their intelligence layer will make differently shaped decisions. Not just faster, but drawing on a wider range of reasoning than any team could sustain on its own.

    The spreadsheet didn’t replace the analyst. The autopilot didn’t replace the pilot. Meta-reasoning won’t replace anyone’s judgment. It’ll just make visible the cognitive ruts we can’t see in ourselves, and open up lines of reasoning we weren’t going to find on our own.

    The future of AI probably isn’t better answers. It’s better ways of arriving at answers. Which starts, somewhat awkwardly, with building systems that think about thinking.

  • We need an Agent OS

    The opinions expressed in this article are my own and do not necessarily reflect those of my clients or employer.

    Every wave of technology arrives with a grand promise. For AI agents, that promise is simple: they will take work off our hands. These intelligent systems—designed to reason, plan, and act autonomously across applications—are supposed to finally bridge the gap between information and action. The demo videos are mesmerizing. We see agents booking travel, generating reports, and handling entire workflows end-to-end. But in the real world, agents are far better at helping us do the thing than doing the thing themselves.

    This is not simply a question of model capability or technical maturity. It is rooted in human psychology. Throughout history, humans have embraced tools that amplify our abilities, but we hesitate to adopt tools that remove our presence entirely. The spreadsheet did not replace the analyst; it made the analyst faster. Autopilot did not eliminate the pilot; it made flights safer. Even Google Search, the ultimate productivity amplifier, leaves the final click and decision to us. AI agents are colliding with the same truth: we are comfortable with co-pilots, but we are not yet ready for captains.

    The Bull and the Bear case for AI Agents

    The AI community today is split between those who believe agents are on the verge of revolutionizing work and those who see them as fragile experiments.

    The optimists, including researchers like Andrej Karpathy and many early enterprise adopters, imagine a near future where agents reliably orchestrate multi-step tasks, integrating across our apps and services to unlock massive productivity gains.

    The skeptics, from voices like Gary Marcus to enterprise CIOs burned by early pilots, argue that most agents are brittle, overconfident, and ultimately stuck in “demo-ware” mode—fun to watch, hard to trust.

    Both perspectives are correct. The bulls see where technology could go, while the bears are grounded in how organizations actually adopt and govern new capabilities. The tension between promise and practice is not about raw intelligence alone. It is about trust, reliability, and the invisible friction of human systems that are not yet designed for fully autonomous software.

    What This Means for Leaders and Builders

    For a business executive, the lesson is clear: staking a transformation on fragile autonomy is reckless. The path forward is to pilot agents as co-pilots embedded into existing workflows, where they can create value without assuming total control. Early wins will come not from tearing down processes but from enhancing the teams you already have.

    For a technology leader, the challenge is architectural. Building a future-proof approach means resisting the temptation to anchor on a single agent vendor or framework. Interoperability and observability will matter far more than flashy demos. Emerging standards for tool integration and context sharing will shape how agents scale safely, and the organizations that anticipate these shifts will be ready when the hype cycle swings back to reality.

    And for the builders—the developers crafting agents today—the priority is survival in a fragmented ecosystem. Agents that are resilient, auditable, and easy to integrate will outlast magical demos that break on first contact with messy enterprise data. Human-in-the-loop design is not a compromise; it is a feature that earns trust and keeps your agent relevant.

    The Realization: Agents Need a Home

    As I worked through these scenarios, one realization became inescapable: agents are not failing because they cannot think. They are failing because they have nowhere to live. They lack persistent context across tools and devices. They lack a reliable framework for orchestrating actions, recovering from errors, and escalating when things go wrong. And most critically, they lack the trust scaffolding—permissions, security, and auditability—that enterprises demand before they let any system truly act on their behalf.

    Right now, each agent is an island. They run in isolated apps or experimental sandboxes, disconnected from the unified memory and control that real work requires. This is why the leap from co-pilot to captain feels so distant.

    History offers a clue about how this might resolve. Every major paradigm in computing eventually required an interface and orchestration layer: PCs had Windows and macOS, smartphones had iOS and Android, and cloud had AWS and Azure. Each of these platforms provided not just functionality but a home—a trusted environment where applications could operate safely and coherently. AI agents will need the same thing.

    Call it, perhaps, an Operating System for Agents. We may not name it that in the market, but functionally, that is what must exist. The companies best positioned to build it are not the niche agent startups or the AI model providers, but the giants that already own our daily surfaces and our context: Microsoft, Apple, and perhaps Google. They control the calendars, files, messages, and permissions that agents will need to function meaningfully. Whoever creates the layer where agents can live, learn, and act—while keeping humans in control—will define the next era of AI.

    The Quiet Future Ahead

    If this plays out as history suggests, the rise of agents will not feel like an overnight revolution. It will feel like a quiet infiltration. Agents will first live as co-pilots within our operating systems, helping us with tasks at the edges of our attention. Over time, they will orchestrate more of our digital lives, bridging apps and services in the background. And finally, the interface itself will become the true home of agents—a layer we barely notice, because humans will still remain in the loop, just as we prefer.

    The agent future, then, is not about replacing us. It is about designing the world where they can live alongside us. Only when that world exists will the real revolution begin.

  • The Four Layers of AI: How Companies Are Actually Using It

    Okay, so I’ve been kicking around the whole AI thing for a while now, chatting with colleagues, clients, and partners from all corners of businesses – marketing gurus, the engineering brain trust, the suits upstairs, and the data wizards. Honestly, it’s been a bit of a whirlwind. Some people are chasing those immediate, shiny AI toys, while others are gazing way off into the future, dreaming up completely different business models.

    One thing’s for sure: there’s no magic AI button that works for everyone. But after all these conversations, a sort of pattern started to emerge in my head, a way to roughly categorize how AI is actually being used out there in the real world. Not the sci-fi stuff, or the deep technical nitty-gritty, but what companies are actually doing with it right now.

    Now, this isn’t some definitive truth, mind you. It’s just a way I’ve found to wrap my head around things, and it feels a little closer to reality than some of the more rigid frameworks I’ve seen floating around. I’m really focusing on the applied side of things – what’s getting built and put to work, not the theoretical research or the intricate details of how those models are trained.

    So, here’s how I’m currently thinking about it – four layers, starting with what seems like the easiest stuff and maybe moving towards the more potentially groundbreaking (though who really knows for sure, right?):

    1. AI for People:

    This feels like the most accessible entry point. Think about those tools that slot right into how we already work. Copilot helping you not sound like a total mess in emails or summarizing those endless meeting transcripts, or ChatGPT / Claude / Gemini (most of the time) actually answering your questions. The cool thing here is you don’t have to rip up your existing systems or change your whole workflow. You’re just giving your team some potentially smarter tools. Seems like a relatively low-stakes way to dip your toes in and maybe see some quicker wins.

    2. AI on Platforms:

    This is where AI starts to get baked into the software we’re already using day-to-day. Think about your CRM like Salesforce or HubSpot suddenly being a bit more insightful – maybe it’s prioritizing leads in a smarter way, suggesting your next best action, or spotting trends in your data you might have missed. A lot of this isn’t stuff you’re building from scratch; it’s coming from the vendors of these platforms. Still, it can quietly have a pretty significant impact on how things get done.

    3. AI in Processes:

    Now we’re starting to get into territory where companies are looking at their core operations and thinking about how AI could fit in. Automating some of those repetitive steps, making decisions a little faster, maybe even improving accuracy in certain areas. This usually means rolling up your sleeves a bit more – probably some custom development, definitely needing to tap into your own company data, and really thinking through how these changes will affect people and processes. The potential upside feels bigger here, but so does the effort involved.

    4. AI-Native Products:

    This feels like the wild frontier, the stuff that maybe wouldn’t even exist without AI at its core. Think about products that can generate content from scratch, make predictions that feel almost uncanny, or learn and adapt based on how people actually use them. These aren’t just souped-up versions of old products; they’re potentially completely new kinds of experiences. Definitely feels riskier to build and launch, but maybe these are the ones that could really shake things up in the long run.

    As you sort of mentally climb these layers, it feels like you’re generally shifting from:

    • Looking for those quick, noticeable improvements to thinking about longer-term, more strategic plays.
    • Grabbing off-the-shelf solutions to needing to build more tailored, custom approaches.
    • Trying to make the existing stuff a little better to venturing into creating entirely new things.
    • Relying on what we’re pretty sure works to exploring what might work, with a bit more uncertainty involved.

    It’s interesting to note that most companies probably won’t march through these layers in some neat, orderly fashion, and honestly, that’s probably okay. Maybe the real value isn’t about ticking off all the boxes, but more about having a sense of where you are right now and where you might want to focus your energy next.

    If you’re in the thick of trying to figure out what AI means for your company, maybe this rough breakdown can be a helpful way to cut through some of the hype and make a little more sense of it all. At least, that’s what I’m hoping.

  • How AI will embed into enterprises

    As enterprises try to experiment, build, and scale for/with AI (including and especially with Gen AI), there are four clear areas where the impact will be felt:

    1. People: While it is universally accepted that your workforce will, “on average,” become smarter using technology, two other things can be true as well. 1/ From an HBR article: “If one universal law regarding the adoption of new technologies existed, it would be this: People will use digital tools in ways you can’t fully anticipate or control.” And 2/ taking inspiration from a famous source, the uncertainty principle, I extrapolate that “enterprises can’t determine with precision their velocity and momentum of AI adoption.” By enterprises I mean large companies with many functions or departments, and at least seven levels of hierarchy. At those levels of scale and complexity, it is hard to get precise information and metrics about AI usage, while simultaneously knowing whether it is useful or not.

      How do I use this information? Think of employees as users or customers. At every stage of AI enablement, talk to them. Ask questions. Understand what they can, should, and would do. Compare expected behaviors against motivations and incentives to anticipate true behaviors.

      Next up… Processes
  • Meet tisit: Your Second Brain, Powered by Generative AI

    Ever Felt Overwhelmed by Information? tisit Organizes Your Knowledge and Suggests What You Might Want to Know Next

    Hello, dear readers!

    In today’s digital age, information is abundant, but organizing it meaningfully is a challenge. How many times have you encountered a fascinating fact or pondered a question, only to lose track of it later? This dilemma is precisely what tisit aims to solve.

    What is tisit?

    tisit—short for “what is it?”—is not just another tool; it’s your personal learning machine. Think of it as a treasure chest for your thoughts, questions, and ideas that not only stores them but actively helps you make sense of them.

    The Problem We All Face

    We often come across exciting ideas or questions that tickle our curiosity. But storing and revisiting this information in a way that adds value to our lives is not easy. Traditional methods like bookmarking or note-taking don’t capture the dynamic nature of learning.

    How tisit Provides a Solution

    Here’s where tisit’s true power comes into play. It uses state-of-the-art artificial intelligence to not only capture your questions and curiosities but to provide well-rounded answers to them. tisit organizes this information neatly, serving as your “second brain” where you can save, nurture, and revisit your intellectual pursuits.

    Taking it a Step Further: Intelligent Recommendations

    But tisit doesn’t stop there. What sets it apart is its ability to make intelligent connections between what you already know and what you might find interesting. It offers recommendations that help you dive deeper into subjects you love, and even suggests new areas that could broaden your horizons. Imagine a system that evolves with you, continuously enriching your knowledge base.

    Want to be Part of Something Big?

    As we put the final touches on tisit, we’re looking for alpha and beta testers to help us make it the best it can be. If you’re passionate about learning and personal growth, we invite you to be a part of this transformative experience.

    Stay Tuned

    For more updates on tisit and how you can be involved in its exciting journey, subscribe to this blog. We promise to keep you informed every step of the way.

  • No skill left behind

    Jeff Bezos is famous for Amazon’s focus on things that aren’t going to change in the next 10 years.

    “I very frequently get the question: ‘What’s going to change in the next 10 years?’ And that is a very interesting question; it’s a very common one. I almost never get the question: ‘What’s not going to change in the next 10 years?’

    And I submit to you that that second question is actually the more important of the two — because you can build a business strategy around the things that are stable in time … In our retail business, we know that customers want low prices, and I know that’s going to be true 10 years from now. They want fast delivery; they want vast selection.

    It’s impossible to imagine a future 10 years from now where a customer comes up and says, ‘Jeff I love Amazon; I just wish the prices were a little higher,’ [or] ‘I love Amazon; I just wish you’d deliver a little more slowly.’ Impossible. […] When you have something that you know is true, even over the long term, you can afford to put a lot of energy into it.”

    From the article: https://www.ideatovalue.com/lead/nickskillicorn/2021/02/jeff-bezos-rule-what-will-not-change/

    Original video can be found here: https://youtu.be/O4MtQGRIIuA?t=267

I have been analyzing data since 2004 – basically all my professional life. I have seen the tools people use change (from spreadsheets and calculators, to reporting tools and infographics, to ever fancier BI tools and slick dashboards). At the same time, the methods we use to analyze data have progressed from basic statistics (totals, averages, min, and max), to more advanced statistics (distributions and variance), to a blend of statistics, mathematics, and computer science (forecasting, prediction, and anomaly/pattern detection).
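That progression fits in a few lines of code. A quick sketch using Python's standard library (the sales figures are made up for illustration):

```python
import statistics

# Hypothetical monthly sales figures, for illustration only.
sales = [120, 135, 128, 160, 145, 210, 132, 138]

# Basic statistics: the totals and averages of early reporting tools.
print("total:  ", sum(sales))
print("average:", statistics.mean(sales))
print("min/max:", min(sales), max(sales))

# More advanced statistics: how the values are spread around the mean.
print("variance:", statistics.variance(sales))
print("stdev:   ", statistics.stdev(sales))
```

The forecasting and anomaly-detection end of the spectrum builds on exactly these foundations; the point is that each generation of methods layers on top of, rather than replaces, the previous one.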

While the methods and tools to process and present data have changed significantly over the years, one core skill has become ever more important to our ability to make meaning out of data.

That skill is the ability to do analysis, and the role is that of a business or quantitative analyst. I know it sounds basic – almost too simple to write about. And therein lies the key reason why many enterprises fail to use data properly. Yes, I admit there are other roadblocks to a firm’s ability to use data – access to data, understanding its meaning and acceptable use, having the tools and methods to process it, and finally, the ability (and time) to interpret it. But I contend that many of these challenges are symptoms of a bigger problem: the lack of a full-time role dedicated to analyzing and making meaning out of data.

As we shifted to “Big Data” around the 2010s, companies quickly created new roles to follow the trend. Hiring data scientists became a top-5 priority for almost any organization. Then came the realization that good data science requires data that is both broad in its sources and deep in its volume. That led to the move to data lakes, to bring all the data into one place. Many companies invested in large Hadoop-based infrastructures, only to find that once the data was in one place, it was almost too much to make sense of, and very expensive to maintain. Cataloging, searching, and distributing data proved to be a big IT overhead. With the advent of the cloud, suddenly there was a new way to solve the problem: migrate to the cloud, decouple storage from processing, and save a ton on overheads. The move to the cloud has proven moderately challenging for many customers, and frustratingly complicated for others. Building, maintaining, and evolving such complex architecture is a full-time job that modern technology organizations should focus on 100%.

What has not changed during this time? It is still a person’s ability to ask business questions, identify the data needed to answer them, apply statistical, mathematical, and computational methods to process that data, and present the results in a consumable manner. That is the job of a data analyst, and I argue we need more of them – it is a skill that does not scale non-linearly. At least, not yet.

  • The simplest data model you can create

    A person can spend years, and make a career out of designing and building Data Models. There are generic data models, industry-specific ones, and then there are functional data models. Each with its own sets of signs and symbols, sometimes its own language, and certainly, its own learning curve.

Here is an example each from IBM, ServiceNow, and Teradata. SaaS companies publish their data model blueprints so that enterprises can map their internal systems to a “standard”, which makes migration and integration easier. Industry-focused institutions (such as TM Forum) publish their versions to capture industry-specific nuances (and a fair amount of jargon). And finally, there are ontologies.

The simplest way to understand data models vs. ontologies is to think specific vs. generic. Data models are often application-specific (tied to an industry or a software application; hence TM Forum and SaaS providers publish their own). Ontologies, in my opinion, are meant to help us understand what exists and how it is related, without drilling into every specific detail (although many have gone that route too). They allow for a scalable understanding of a concept without explaining the concept in detail.

    Here, I attempt to represent the simplest data model one can draw for an enterprise. It probably applies to 80% of situations and to B2B, B2C, B2B2C, or D2C business models.

    Minimalist Enterprise Data Model

I think that any enterprise process can be represented as a combination of a few basic entities: asset, product, customer, workforce, and systems. Take customer service, for example. A customer tries to pay their bill online (customer -> systems). The transaction fails. The customer then calls the contact center, talks to an employee, complains about the issue, and eventually resolves it (customer -> workforce -> systems).
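To make the idea concrete, here is a minimal sketch of the five entities and the customer-service example as code. The class and variable names are my own hypothetical choices, not part of any published model:

```python
from dataclasses import dataclass

# The five basic entities of the minimalist model.
ENTITIES = {"asset", "product", "customer", "workforce", "systems"}

@dataclass(frozen=True)
class Interaction:
    """One step in an enterprise process, linking two basic entities."""
    source: str
    target: str

    def __post_init__(self):
        if self.source not in ENTITIES or self.target not in ENTITIES:
            raise ValueError("interactions must connect basic entities")

# A failed online bill payment: customer -> systems.
online_payment = [Interaction("customer", "systems")]

# The follow-up call that resolves it: customer -> workforce -> systems.
contact_center = [Interaction("customer", "workforce"),
                  Interaction("workforce", "systems")]
```

Any process, in this view, is just a list of such interactions – which is also why the picture expands so naturally when you need more detail.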

How would this picture work for a multi-sided platform company? (Because everyone is a platform company now, just like everyone was a technology company before.) In a multi-sided platform, your producers/suppliers are your consumers too. Unless you treat them like customers, serve them as customers, and retain them as customers, you won’t have a thing to sell on your platform.

    Now the trick is to expand this picture as needed. But let us save that for another post.

  • The essence of Artificial Intelligence is time travel

One of my favorite writers is Matt Levine. He writes for Bloomberg, and has a natural (and funny) ability to explain complex financial concepts to laypeople like me. Check out his (mostly) daily emails and his various podcast appearances (here and here) where he explains his writing process. Matt famously declared in one of his posts:

    The essence of finance is time travel. Saving is about moving resources from the present into the future; financing is about moving resources from the future back into the present. Stock prices reflect cash flows into an infinite future; a long-term interest rate contains within it predictions about a whole series of future short-term interest rates.

    The essence of AI (and any data-driven insight or decision) is time travel.

In a sense, we time travel whenever we use data to understand what happened (describe), to explain why it happened (diagnose), to anticipate what will happen (predict), and to plan against that prediction (prescribe).

The size of the data does not matter – it could be a very small sample (say, personal experience) or “Big” data (say, a million user click-throughs). As long as we use data that already exists to inform our knowledge, we are doing time travel. Isn’t that exciting?

When presenting last quarter’s sales figures, we often describe the circumstances and results as if they were unfolding in that very instant. Similarly, when we predict a sales forecast (using a simple regression technique or an advanced neural-net model), we are using historical data and projecting it into the future (under assumptions that are expected to remain valid).
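As a minimal sketch of that projection, here is a least-squares line fit to six quarters of made-up sales figures, “time traveling” one quarter ahead. Pure standard Python; a real forecast would, of course, also test whether the trend assumption still holds:

```python
# Hypothetical quarterly sales history.
quarters = [1, 2, 3, 4, 5, 6]
sales = [100, 110, 125, 130, 150, 160]

# Ordinary least-squares fit of a straight line through the history.
n = len(quarters)
mean_x = sum(quarters) / n
mean_y = sum(sales) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(quarters, sales))
         / sum((x - mean_x) ** 2 for x in quarters))
intercept = mean_y - slope * mean_x

# "Time travel" forward: project the past trend one quarter ahead.
forecast_q7 = intercept + slope * 7
print(round(forecast_q7, 1))  # prints 171.7
```

The one-line forecast is the easy part; the assumption baked into `slope` – that whatever drove the trend keeps driving it – is the part the rest of this post is about.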

    So just to recap 2021, it was a breakthrough year for Tesla and for electric vehicles in general. And while we battled, and everyone did, with supply chain challenges through the year, we managed to grow our volumes by nearly 90% last year. This level of growth didn’t happen by coincidence. It was a result of ingenuity and hard work across multiple teams throughout the company. Additionally, we reached the highest operating margin in the industry in the last widely reported quarter at over 14% GAAP operating margin. Lastly, thanks to $5.5 billion of [Inaudible] small finger by now — $5.5 billion of GAAP net income in 2021, our accumulated profitability since the inception of the company became positive, which I think makes us a real company at this point. This is a critical milestone for the company.

Elon Musk, Tesla CEO, from the Q4’21 earnings call transcript

As would be true with time travel, we must remember that moving back and forth in time is not only a little difficult; we must also account for external circumstances (macro variables) that may not have been captured in the data. In the AI modeling world, we like to emphasize data cleanliness, completeness, and consistency. A model in a computer is much cleaner and simpler than the real world, because our focus is on building a highly accurate model. But in that endeavor we often lose the details that we believe, at the time, to be less useful – such as anomalies (very high or low values that seem to skew or exaggerate our results) or missing values. It is far more convenient to filter out rows and columns than to dive into another rabbit hole (why is the data missing? what might have happened to it? – which is another time-travel dimension).
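To make the convenience concrete: dropping the inconvenient rows is a one-liner, which is exactly why the rabbit hole gets skipped. A hypothetical sketch, with made-up sensor-style readings and an arbitrary outlier threshold:

```python
# Hypothetical readings with missing values (None) and one wild outlier.
readings = [102, 98, None, 105, 9999, 101, None, 97]

# The convenient one-liner: drop anything missing or anomalous.
clean = [r for r in readings if r is not None and r < 1000]

# The question that usually goes unasked: what happened to the rest?
dropped = len(readings) - len(clean)
print(clean, dropped)  # prints [102, 98, 105, 101, 97] 3
```

Three of eight observations vanish without a trace. Whether they were sensor glitches, a billing-system outage, or the most interesting customers in the dataset is precisely the time-travel question the one-liner never asks.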

In fact, this may be why most data scientists spend less time on EDA (exploratory data analysis) than they should. Everyone is looking for what the model says, so let’s give them an answer; how we arrived at the answer is a conversation for later (which we often don’t get to… until it’s too late).

  • Hello World!

Welcome to my blog about all things data. This blog is really a place where I try to think clearly about the topic by writing about it.

If you happen to visit, I’d appreciate any feedback, questions, or input you have on the topics.

    Some things to make clear before I start writing:

1. About the blog title: I’m shamelessly inspired by one of my absolute favorite books/writers, The Psychology of Money by Morgan Housel. I apologize in advance to him (and others) for borrowing it. Like the book (and the original blog), this is less about sharing new insights or silver bullets; to me, it is more about sharing what has not changed, and what remains relevant to people who want to use data in decision making.
    2. About the content: I’ll mostly be synthesizing opinions and experiences from my career, as well as books/blogs/content from others who know and have done better. It’s about connecting the dots as I see it.
3. About the format: it will be mostly open-ended, long-form, and without a specific flow. I’ll write about data-related topics as they develop in my mind; this is not a book with a sequence of chapters to read in order.

    See you next time!