Analysis of ML engineering lifecycles, common pitfalls, and a copy-and-paste template you can use.

Image source: Christopher Lague on pixy

Machine learning engineering is hard, especially when developing products at high velocity (as is the case for us at Abnormal Security). Typical software engineering lifecycles often fail when developing ML systems.

How often have you, or someone on your team, fallen into the paralysis of endless ML experimentation and twiddling? Found ML projects taking two or three times as long as expected? Pivoted from an elegant ML solution to something simple and limited to ship on time? If you answered yes to any of these questions, this article may be right for you.

Purpose of this article:

  1. Analyze why software engineering lifecycles…


Developing a machine learning product for cybersecurity comes with unique challenges. For a bit of background, Abnormal Security’s products prevent email attacks (think phishing, business email compromise, malware, etc.) and also identify accounts that have been taken over. These attacks are clever social engineering attempts launched to steal money (sometimes in the millions) or gain access to an organization for financial theft or espionage.

Detecting attacks is hard! We’re dealing with rare events: as low as 1 in 10 million messages or sign-ins. The data is high-dimensional: all the free-form content in an email and linked or attached in…


At the core of all Abnormal’s detection products sits a sophisticated web of prediction models. For any of these models to function, we need deep, thoughtfully engineered features, careful modeling of sub-problems, and the ability to join data from a set of databases.

For example, one type of email attack we detect is called Business Email Compromise (BEC). A common BEC attack is a “VIP impersonation” in which the attacker pretends to be the CEO or other VIP in a company in order to convince an employee to take some action. …
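To make the feature-engineering point concrete, here is a minimal sketch of one signal a VIP-impersonation detector might compute: the display name matches a known VIP, but the sender address does not. Every name here (the `Vip` record, `known_vips`, `is_vip_impersonation`) is hypothetical and for illustration only, not Abnormal’s actual feature code.

```python
# Hypothetical sketch: flag a possible VIP display-name impersonation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Vip:
    name: str     # e.g. "Jane Smith"
    address: str  # the VIP's known corporate address

def is_vip_impersonation(display_name: str, sender_address: str,
                         known_vips: list[Vip]) -> bool:
    """True if the display name matches a known VIP but the sender
    address is not that VIP's real address."""
    display = display_name.strip().lower()
    for vip in known_vips:
        if display == vip.name.lower() and sender_address.lower() != vip.address.lower():
            return True
    return False

# Usage:
vips = [Vip(name="Jane Smith", address="jane.smith@example.com")]
print(is_vip_impersonation("Jane Smith", "jsmith1234@gmail.com", vips))  # True
```

In a real system a signal like this would be one feature among many feeding the downstream models, rather than a standalone verdict.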


Authors: Jeshua Bratman and Vineet Edupuganti

Our core email attack detection product at Abnormal works by processing each incoming message, applying a series of classification models, and ultimately deciding whether the message might be an attack. This detection system runs as an online distributed pipeline processing millions of messages per day.
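In spirit, that online decision step looks something like the sketch below: several sub-models each score the message, and an aggregation step turns those scores into a final decision. All names, the stand-in sub-models, and the threshold are assumptions for illustration, not the production system.

```python
# Illustrative sketch of the online decision step; every name here is
# hypothetical. Each sub-model scores the message, and a simple
# aggregator turns those scores into a final attack/safe decision.
from typing import Callable, Dict, List

Message = Dict[str, str]  # e.g. {"subject": ..., "body": ..., "sender": ...}

def urgency_model(msg: Message) -> float:
    # Stand-in sub-model: crude keyword score for urgent language.
    return 0.9 if "urgent" in msg["body"].lower() else 0.1

def payment_model(msg: Message) -> float:
    # Stand-in sub-model: crude keyword score for payment requests.
    return 0.8 if "wire transfer" in msg["body"].lower() else 0.1

def score_message(msg: Message,
                  models: List[Callable[[Message], float]],
                  threshold: float = 0.7) -> bool:
    # Aggregate sub-model scores; a real system would likely use a
    # learned meta-model rather than a max.
    final_score = max(m(msg) for m in models)
    return final_score >= threshold  # True => treat as a possible attack

print(score_message({"body": "URGENT: please send a wire transfer today"},
                    [urgency_model, payment_model]))  # True
```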

Rescoring

One key component in our pipeline is called rescoring. This data pipeline loads historical examples of email attacks in order to evaluate how accurately the current detection system scores those past attacks. …
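Conceptually, rescoring amounts to replaying labeled past attacks through the current scorer and measuring how many are still caught. The sketch below is an assumption about the shape of such a job; the loader and detector are stand-ins, not the actual pipeline.

```python
# Hypothetical sketch of a rescoring job: replay labeled historical
# attacks through the current detector and report recall.
from typing import Callable, Iterable

def rescore(historical_attacks: Iterable[dict],
            detect: Callable[[dict], bool]) -> float:
    """Fraction of known past attacks the current detector still catches."""
    caught = total = 0
    for attack in historical_attacks:
        total += 1
        if detect(attack):
            caught += 1
    return caught / total if total else 0.0

# Usage with trivial stand-ins:
attacks = [{"body": "urgent wire transfer"}, {"body": "hello"}]
detector = lambda msg: "wire transfer" in msg["body"]
print(f"recall on historical attacks: {rescore(attacks, detector):.2f}")  # 0.50
```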


My favorite part of working at Abnormal Security is seeing the myriad of nefarious attacks we are able to stop. These attacks include everything from attempts to steal millions of dollars, to installing ransomware that cripples hospitals, to state actors compromising our power grid. And right at the core of the product sit some tough ML problems. How do we robustly identify behavioral anomalies? How do we quickly adapt to an ever-changing attack landscape? How do we catch carefully crafted social engineering strategies designed to trick people?

Just before the election, an attack went out to thousands of voters…


This article is a follow-up to one I wrote a year ago — Lessons from building AI to Stop Cyberattacks — in which I discussed the overall problem of detecting social engineering attacks using ML techniques and our general solution at Abnormal. This post aims to walk through the process we use at Abnormal to model various aspects of a given email and ultimately detect and block attacks.

As discussed in the previous post, sophisticated social engineering email attacks are on the rise and getting more advanced every day. They prey on the trust we put in our business tools…


Successful phishing attack on John Podesta that led to the 2016 DNC email leaks. Released by WikiLeaks.

On March 19th, 2016, John Podesta was tricked into revealing his Gmail credentials to a Russian-backed organization, which then released emails regarding the Clinton campaign, effectively influencing the 2016 election.

Social engineering attacks like this (and much more sophisticated ones) are on the rise and getting more advanced every day. They prey on the trust we put in our business tools and social networks, especially when a message appears to be from someone on our contact list (but is not) or even more insidiously when the attack is actually from a contact whose account has been compromised. …


I’m not a Russian agent. I’m not planning to attack a US election (sounds like a glamorous job, but it’s not for me). But if I were, and I worked in the disinformation division, I’d have my GPUs spinning on the freshest Deepfakes money could buy.

I’d be impersonating the Democratic hopefuls, the squad, and who-knows-who else. The videos would be carefully crafted to damage these politicians, cost them votes, and discredit the Democratic party in general.

We’d have a Deepfake showing Elizabeth Warren calling American factory workers “a thing of the past”. We’d have a snappy video of Ilhan…


With class-imbalanced ML problems, it’s often convenient to subsample the negative examples to speed up data pipelines or training jobs. For example, one of our problems at Abnormal Security is to classify rare social engineering and phishing email attacks, which occur at rates between 0.001% and 0.1% of all message volume. Because of this class imbalance, we want to include every single positive example (attacks) but only a portion of negative examples (safe emails).

For example:

  - Real distribution: 100M negatives, 10k positives
  - Training distribution: 10M negatives (a 10% subsample), 10k positives

When we train a model on…
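A model trained on the subsampled distribution will over-estimate the probability of an attack, because negatives are under-represented. One standard correction, sketched below, rescales the model’s output odds by the negative keep rate (this is a general technique for subsampled training data, not necessarily the exact adjustment used at Abnormal).

```python
# General correction for training on subsampled negatives.
# If negatives are kept with probability beta, a model trained on the
# subsample over-estimates P(attack). Convert back to real-world odds:
#   odds_real = beta * odds_train  =>  q = beta*p / (beta*p + 1 - p)
def calibrate(p_train: float, beta: float) -> float:
    """Map a probability from the subsampled distribution back to the
    real distribution, where beta is the negative keep rate."""
    return beta * p_train / (beta * p_train + 1.0 - p_train)

# With the 10% subsample above, a training-time score of 0.5 corresponds
# to a much smaller real-world attack probability:
print(calibrate(0.5, beta=0.1))  # ~0.091
```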


Week 3 of the iO intensive focused on scenework, particularly grounded two-person scenes and faster second-beat scenes.

(scroll to the bottom for 6 weird tricks to improve your scenework)

Our teacher was Jason Shotts who is incredibly skilled at playing patient grounded scenes (TAKE A CLASS FROM HIM IF YOU CAN!). He taught at iO Chicago for years and recently moved to LA where he teaches and plays with a variety of groups at iO West including the charming duo: Dummy. Many of the notes in this blog are direct quotes from Jason.

This week was the most memorable…

Jeshua Bratman

Founding engineer and Head of ML at Abnormal Security. I write about AI, ML, Data Science, and Cyber Security mixed with some comedy
