Compared to men, women are 47% more likely to be seriously injured and 17% more likely to die if they’re in a car crash. Why?
In a word, bias. For decades, the crash test dummy was modeled only after the 50th-percentile male. And even now, it’s still the most commonly used dummy.
In some ways, this shouldn’t be surprising: after all, everything is created with bias, whether or not we realize it. And yes, that remains true even when we have the best of intentions.
And then there’s technology. Historically, technology has been seen as a tool to help humans make better, more objective or impartial decisions, and AI has been the (fickle) darling of this ethos. But with so many AI blunders (like the failings of Apple Card and facial recognition), we need to remember that the undeniable promise of AI cannot overshadow its shortcomings.
Take a recent example: well-respected market intelligence firm CB Insights launched Management Mosaic, which uses an algorithm to score the “quality” of a startup’s founding and management team to accelerate purchasing, investment, and M&A decisions. CB Insights touts the tool as an “objective, data-driven” way to find the “best” startups, using data points such as past employers, the milestones those companies achieved while the founder or team members worked there, network quality based on other people they’re likely to know, and educational institutions.
Meanwhile, there’s a reason legacy admissions are being eradicated in higher ed; most networks suffer from affinity bias; and many, many underperforming companies have incredible employees who are stymied by inept executives (and vice versa).
And remember, less than 1% of VC dollars go to Black female entrepreneurs, and as of June 2021, less than 20% of total VC deals went to startups with at least one female founder. And those figures are actually improvements over just a couple of years ago. Inarguably, the past (and status quo) of the startup world cannot be a blueprint for its future, so relying on historical data to train these models may bake in the very biases CB Insights is trying to counteract. While CB Insights CEO Anand Sanwal points out that Mosaic’s algorithm does not look at demographics as “inputs,” that alone does not make the model unbiased. Many variables serve as proxies for protected classes – a founder’s alma mater, employer history, or network, for instance, can correlate strongly with race and gender – and those proxies can produce disparate impact. This is a classic example of how even “good” data can produce “bad” outputs.
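To make the proxy problem concrete, here is a minimal sketch in Python, using purely synthetic data and a made-up proxy feature – a generic illustration, not Mosaic or any real model. The classifier never sees the protected attribute, yet it still fails the common “four-fifths” disparate impact check, because a correlated proxy carries the demographic signal for it:

```python
# Hypothetical illustration with synthetic data -- NOT Mosaic or any
# real model. The classifier never sees the protected attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (e.g., gender). Never passed to the model.
group = rng.integers(0, 2, n)  # 0 = group A, 1 = group B

# Proxy feature (e.g., "worked at a feeder company"), correlated with
# group membership because of historical access patterns.
proxy = (rng.random(n) < np.where(group == 0, 0.7, 0.3)).astype(float)

# Historical "success" labels reflect past biased decisions, so they
# track the proxy as much as underlying ability.
ability = rng.normal(size=n)
label = (ability + 2.0 * proxy + rng.normal(size=n) > 1.0).astype(int)

# Demographics are not inputs -- exactly the defense quoted above.
X = np.column_stack([proxy, ability])
model = LogisticRegression().fit(X, label)
selected = model.predict_proba(X)[:, 1] > 0.5

# Disparate impact: ratio of selection rates between the two groups.
rate_a = selected[group == 0].mean()
rate_b = selected[group == 1].mean()
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, "
      f"ratio={rate_b / rate_a:.2f}")  # well below the 0.8 four-fifths line
```

Dropping the demographic column doesn’t drop the signal – it just hides it.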
What’s more, one of the key fuels of startup success is the ability to raise capital – especially early on, when there is less “proof” that your idea is a winning one. Still, a 2020 DocSend study found that VCs spend 18% more time on pitch decks from all-male teams than on those from all-female teams. Why? Maybe because white men control 93% of venture capital dollars. So since many qualified founders have been – and still are – denied equitable opportunities for early-stage funding, we see fewer of those founders hitting unicorn status or ringing the bell. Thus the bias becomes a self-fulfilling prophecy.
Still, while AI ethicists everywhere question the potential harms of Mosaic, CB Insights stands by its tool’s predictive accuracy. This scenario illustrates a key dilemma in AI ethics: what, exactly, is ethical?
FICO’s 2021 “The State of Responsible AI” report found that there is no consensus on what a company’s AI responsibilities should be, especially in use cases where AI’s potential for harm is not directly linked to human fatalities. Meanwhile, financial standing is a proven and powerful social determinant of health – so where does that leave fintech?
While regulatory bodies work to create a more universal definition of AI ethics, at Stratyfy we think a lot about the foundational requirements needed to ensure a responsible AI baseline in fintech.
First, think outside the technology itself – who is building it? Strive for diverse teams (I’m not one to cater to the argument that there needs to be a business case for everything, but we do know diverse teams yield better results).
Second, before you build an MVP, build an EVP – an ethically viable product (more on that here) – and ensure that AI ethics aren’t just an add-on or afterthought.
Third, be critical of AI – do you need it? The model should match the problem you’re trying to solve, and the fact is that not all business problems need machine learning. Start with the business problem.
Lastly, if you are going to use AI, make it explainable. The AI must be transparent – and at Stratyfy, when we say that, we mean it. In order to explain decisions to our customers and mitigate potential bias, we must first be able to see (and understand) how the AI is making its decisions. And there’s a lot of work to be done there: according to FICO, just 35% of teams can explain how specific decisions or predictions are made.
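So what does that kind of transparency look like in practice? Here is a generic sketch – not Stratyfy’s system; the feature names, toy data, and model choice are all assumptions – of why inherently interpretable models make decision-level explanations cheap: with a linear model, each feature’s contribution to a single decision is simply its coefficient times the feature’s value:

```python
# Generic sketch of decision-level explainability -- not Stratyfy's
# system. With a linear model, each feature's contribution to a single
# decision is just coefficient * (standardized) feature value.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "utilization", "delinquencies"]

# Toy "historical" data, purely illustrative.
X = np.array([
    [55_000, 0.30, 0],
    [32_000, 0.85, 2],
    [78_000, 0.10, 0],
], dtype=float)
y = np.array([1, 0, 1])  # 1 = approved historically

# Standardize so contributions are comparable across features.
mu, sigma = X.mean(axis=0), X.std(axis=0)
model = LogisticRegression().fit((X - mu) / sigma, y)

# Explain one applicant's score, feature by feature, in log-odds.
applicant = (np.array([40_000, 0.60, 1.0]) - mu) / sigma
contributions = model.coef_[0] * applicant
for name, c in sorted(zip(features, contributions), key=lambda t: t[1]):
    print(f"{name:>13}: {c:+.3f}")
print(f"{'intercept':>13}: {model.intercept_[0]:+.3f}")
```

Black-box models, by contrast, need post-hoc explainers to approximate what an interpretable model provides by construction – which helps explain why so few teams can account for their predictions.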
Stratyfy was founded to help solve a specific and pervasive problem: unequal access to credit. Our goal is to utilize unique technology that brings the human element to AI. We are passionate about our mission, and becoming a mother introduced a new sense of urgency for me to drive this change. I look at my son and I think about the world I want him to live in—and how it differs from the world today.
That human element means AI must be flexible and controllable – i.e., it must allow for human input and intervention – especially in highly variable sectors.
Take Stratyfy’s Probabilistic Rules Engine (PRE, for short) as an example. Unlike classical rule-based systems built on hard (deterministic) rules, the rules in PRE can express soft statements and combine knowledge from a variety of sources, much like how humans make decisions. This makes PRE models easier to use and more transparent while still offering powerful predictions, with accuracy on par with black-box approaches.
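PRE itself is proprietary, so the sketch below is just one plausible reading of what “soft” rules could mean – the rules, weights, and log-odds combination scheme are my assumptions, not Stratyfy’s implementation. The idea: each human-readable rule carries a confidence weight, fired rules combine additively in log-odds space, and every score comes back with the list of rules that produced it:

```python
# One plausible reading of "soft" rules -- NOT Stratyfy's actual PRE.
# Each rule fires with a confidence weight; fired rules combine
# additively in log-odds space, so every decision is traceable to the
# human-readable rules that produced it.
import math
from dataclasses import dataclass
from typing import Callable

@dataclass
class SoftRule:
    description: str                   # human-readable, auditable
    condition: Callable[[dict], bool]  # when the rule applies
    weight: float                      # log-odds contribution

RULES = [
    SoftRule("thin credit file", lambda a: a["history_months"] < 12, -0.6),
    SoftRule("steady income", lambda a: a["income_stability"] > 0.8, +0.9),
    SoftRule("high utilization", lambda a: a["utilization"] > 0.7, -1.1),
]

def score(applicant: dict, base_log_odds: float = 0.0) -> tuple[float, list[str]]:
    """Return approval probability plus the rules that fired (the explanation)."""
    log_odds, fired = base_log_odds, []
    for rule in RULES:
        if rule.condition(applicant):
            log_odds += rule.weight
            fired.append(f"{rule.description} ({rule.weight:+.1f})")
    return 1 / (1 + math.exp(-log_odds)), fired

prob, why = score({"history_months": 8, "income_stability": 0.9, "utilization": 0.4})
print(f"approval probability: {prob:.2f}")
print("because:", "; ".join(why))
```

Because the rules are legible and editable, humans can add, remove, or reweight them directly – the kind of input and intervention described above, built in by construction.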
At the risk of presenting an oversimplified solution, I’ll close by reiterating that I know this isn’t easy. AI ethics is a philosophical question that sits at the intersection of human and machine, and it asks us to view human fallibility in an entirely new light. But while I know bias can never be truly erased, we can work to design technology in ways that mitigate it. We talk a lot about how AI (and technology more broadly) can impact human life, but we also need to hold each other accountable for the ways that we, as humans, impact technology. Bias is a good place to start.