Monday, June 30, 2014

i wouldn't bet on it

last year cryptography professor matthew green made a bet with mikko hypponen that by the 15th of this month there would be a snowden doc released that showed that US AV companies collaborated with the NSA. he has since accepted that he lost the bet to mikko, but should he have?

i mentioned to matthew the case of mcafee being in bed with government malware writing firm hbgary and mikko chimed in that hbgary wasn't an AV company and being partners with them wasn't enough to win the bet. aside from the fact that this is the first time after all these years that i've seen a member of the AV industry publicly comment on the relationship between mcafee and hbgary (i guess managing matthew's perception of AV is more important than managing mine), something about mikko's response rang hollow.

one way to interpret the situation with hbgary is to view them as government contractors whom mcafee endorsed, advertised, and helped get their code onto the systems of mcafee's customers (hbgary makes a technology that integrates with mcafee's endpoint security product). that certainly would have given hbgary access to systems and organizations they might have had difficulty getting otherwise. i have no idea if that access was ever used in an offensive way, though, so this line of thought is a little iffy.

another way to interpret the situation is to directly contradict mikko and admit that hbgary is a member of the AV industry. after all, they make and sell technology that integrates into an endpoint security product. they may only be on the fringe of the industry, but what more do you have to do to be a member of the industry than make and sell technology for fighting malware? the fact that they also made malware for the government makes them essentially a US AV company that collaborated with the government in one of the worst ways possible.

i feel like this should be enough to have won matthew green the bet, at least in spirit, but the letter of the bet was apparently that a snowden doc would reveal it and the revelation about mcafee and hbgary actually predates snowden's leaks by a number of years. 

so the question becomes: are there any companies that happen to be members of the AV industry and also happen to have been fingered by a snowden leak? it turns out there is (at least) one. they were probably forgotten because they're not just an AV vendor, but AV vendor does happen to be one of the many hats that microsoft wears (plenty of security experts were even advising people to drop their paid-for AV in favour of microsoft's offering at one point in time), and microsoft was most certainly fingered by snowden docs. the instances where microsoft helped the government may not have involved their anti-malware department, but the fact remains that a company that is a member of the AV industry was revealed by snowden documents to have collaborated with the government.

i imagine mikko could find a way to argue this doesn't count either - i admit it's not air-tight - but given how closely it matches both the spirit and (as i understand it) the letter of the bet, i think mikko should match the sum he had matthew pay to the EFF and pay it to an organization of matthew's choosing. i won't bet on that happening, though.

Saturday, June 14, 2014

confessions of a twitter worm victim

as some of you may know, this past wednesday someone released a self-retweeting worm on twitter that exploited an XSS vulnerability in the popular twitter client tweetdeck. i happen to be a tweetdeck user and i got hit by the worm, not once but twice. since i believe in owning up to my mistakes in order to serve as an example to others, i figured it was important for me to write this post.

this isn't the first time i've had to do this. four years ago it was discovered that there had been a RAT bundled with the software for a USB battery charger sold by the energizer battery company (it had gone undetected by the community for years) and i wrote about my experience then as well.

this was the first time getting hit with something that could spread to others, and spread it did. i know this because i got email notifications from twitter when other people's tweetdeck clients automatically retweeted the tweet that my client automatically retweeted. that's actually one of the things i think i did right - i have twitter set up to send me notifications for as much of that kind of activity as i possibly can. the result is that i get what is essentially an activity log sent to my email in near real-time, and that alerted me to the problem within minutes of it occurring.

that quick notification allowed me to undo the retweet before it propagated from my account again. that limited the extent to which i contributed to the spread of the worm. acting quickly to neutralize the threat in my twitter stream is another thing i believe i did right.

unfortunately i also did a number of things wrong. for example, i knew about the XSS vulnerability before i encountered the worm, i saw excellent preventative advice and even retweeted it, but i failed to follow it exactly. the advice was to sign out of tweetdeck and then de-authorize the app in twitter. what i did instead was close the tweetdeck tab in my browser and de-authorize the app. i took a shortcut because i didn't believe anyone i followed would actually tweet anything malicious. i didn't anticipate that they might do so involuntarily - the possibility of something like the samy worm from years past never occurred to me. and so when news spread that the vulnerability had been fixed and that users needed to log out and back in again to apply the fix, i re-opened the tab, re-authorized the app (because that was the first prompt i was presented with) and then went hunting for the logout button. that's when i got the email notification that another user had retweeted one of my retweets.

however, i did not see the alert popup that was supposed to indicate the worm had executed. i didn't realize it at the time, but that was important because it meant there was more going on than i realized. it meant that the worm had not executed in the client i was sitting in front of. what i had forgotten was that i had another tweetdeck client open on a computer at work, and when i re-authorized the app the worm executed on the work computer rather than my home computer. it wasn't until i was on a bus to see an old friend that the significance of what had (and had not) happened clicked, and then it was another several hours before i could get access to that work computer (where the alert popup was still patiently waiting for me) in order to log out and back into tweetdeck again, which i did without de-authorizing the app beforehand, so the un-retweeted tweet got re-retweeted.

in short it was a comedy of errors.

what i've taken away from this is a number of things:

  1. i am once again humbled by the clear demonstration that i am not perfect. while i certainly knew conceptually that i wasn't perfect, i have had a surprisingly good track record with malware. having my ass handed to me made the appreciation of my imperfection much more visceral.
  2. i've gained a better appreciation for the value of de-authorizing apps in twitter. to a certain extent it can seem kind of abstract, but what it's actually doing is isolating a vulnerable component from the rest of the network, not unlike pulling the network cable out of an infected computer back when worms that enumerated network shares or sent mass emails were prevalent.
  3. i've identified my failure to log out of things (not just tweetdeck but all sorts of sites) as a bad habit. it's pure laziness, and it's not even rational laziness because there's almost no effort involved in logging in when you use a password manager. part of the reason i didn't post this sooner is that i wanted to see if breaking this habit was a reasonable expectation or whether saying i was going to improve was just wishful thinking. so far this improvement seems like an entirely reasonable expectation - i've had no problems logging out of things when i don't need the session open any longer.
at the end of the day, improvement is what sets an incident apart from a failure. the only real failure is a failure to learn from your mistakes and do better the next time. i'm not perfect (no one is) but each time i screw up i make sure i get better.

Tuesday, May 06, 2014

symantec anti-virus is dead

there's a lot of digital ink getting spilled right now over symantec's brian dye saying that anti-virus is dead (one, two, three, four, five, and more to come i'm sure), but i don't see many people asking the tough question, which is "why should we believe symantec now?"

looking back over my past posts about symantec paints a pretty unappealing picture, and reveals what might be considered a pattern. virtually right from the beginning they named their consumer anti-virus product after a man who famously said computer viruses were an urban legend. then, when they tried to reinvent themselves with their "security 2.0" campaign, they claimed the virus problem was solved. now, when it appears they're trying to reinvent themselves again, they're saying that anti-virus is dead. it seems that whenever their business plan calls for serious marketing, they latch on to messages that grab attention but whose reality is questionable at best.

when the biggest anti-virus vendor starts saying anti-virus is dead, there's no way that isn't going to grab a lot of attention. it seems designed to hurt the very industry they're on top of, while they are (apparently) in the process of trying to distance themselves from it. i've noted in the past that the biggest players in the industry are hurt the least by the consequences of their bad acts. as market leaders they control perception not just of themselves but of the entire industry, so even if a smaller player wanted to present a more reasonable and accurate view of things in order to compete on technical merit rather than deceptive marketing manipulation, there's very little impact they could have. saying that anti-virus is dead while simultaneously trying to position themselves as something else is essentially a scorched earth tactic. it will hurt the entire anti-virus industry while drawing attention to the alternate industry they're trying to create/break into.

when the biggest anti-virus vendor starts saying anti-virus is dead, there's also no way that shouldn't raise the hairs on the back of your neck. out of the blue symantec starts mimicking exactly the same message that enterprise level infosec people have been saying for years? am i the only one who thinks that sounds like it belongs in the too good to be true category? this is the same kind of technique a malware writer might use to trick you into trying out his/her handiwork. before you get any ideas about symantec using 'trojan marketing', though, it's also the same kind of technique AV marketers used when they told people just using AV would solve their security problems. too good to be true has been part of the AV marketing arsenal from the very beginning, it's just that this new one about AV being dead seems to be designed for a much more select class of dupe, i mean user. this is the same shit, it's just a different pile.

it'll probably work, though. telling people what they want to hear is unfortunately quite effective. even smart people will fall for it, because despite being smart, those people still want to hear something that is far too simplistic to have anything in common with reality. when you look closely enough, the truth always seems to wind up being messy and complicated, not something that could fit in a sound-bite.

this is the reason why i try to convince people to stop listening to marketing (and really, everything that comes out of a vendor is marketing to some degree). this is almost certainly nothing more than another in a long line of efforts to deceive and manipulate the market. if you must listen to something, listen to their actions. they aren't retiring their AV product, so how dead can AV really be?

all that being said, i actually do welcome their shift in focus from purely prevention to now include more detection and recovery. it's about time AV vendors started getting serious about the last 2 parts of the PDR triad (prevention, detection, recovery). it doesn't have to be purely service-based detection, though. years ago we had generic detection tools (such as integrity checkers) that end users could use themselves. symantec's focus on providing detection services instead of detection tools betrays a philosophy of not trusting the users' competence, which in turn is consistent with their long history of failing to educate, elevate, and empower their users. maybe that kind of paternalism is appropriate for home users, but enterprise security operations? i thought we could expect enterprise level IT and infosec professionals to develop skills and expertise in these kinds of areas, so why is symantec choosing a path that takes these things out of advanced customers' hands?
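
to make the idea of a generic detection tool concrete, here's a minimal sketch of an integrity checker (entirely my own illustration, not any vendor's actual product): record a hash for every file you care about, then re-hash and compare later. the FNV-1a hash below is just a stand-in to keep the example self-contained - a real tool would use a cryptographic hash and would protect the baseline file itself from tampering.

/* minimal integrity checker sketch - illustrative only.
 * FNV-1a is NOT collision resistant; a real tool would use a
 * cryptographic hash and guard the baseline against tampering. */
#include <stdio.h>
#include <stdint.h>

/* 64-bit FNV-1a over the contents of a file */
static int hash_file(const char *path, uint64_t *out)
{
    FILE *f = fopen(path, "rb");
    uint64_t h = 14695981039346656037ULL;
    int c;

    if (f == NULL)
        return -1;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 1099511628211ULL;
    }
    fclose(f);
    *out = h;
    return 0;
}

/* baseline file format, one entry per line: <hex hash> <path> */
int main(int argc, char **argv)
{
    char line[4352], path[4096];
    unsigned long long recorded;
    uint64_t current;
    FILE *baseline;

    if (argc != 2) {
        fprintf(stderr, "usage: %s <baseline>\n", argv[0]);
        return 1;
    }
    if ((baseline = fopen(argv[1], "r")) == NULL) {
        perror("baseline");
        return 1;
    }
    while (fgets(line, sizeof(line), baseline) != NULL) {
        if (sscanf(line, "%llx %4095[^\n]", &recorded, path) != 2)
            continue;                       /* skip malformed lines */
        if (hash_file(path, &current) != 0)
            printf("MISSING:  %s\n", path); /* deleted or unreadable */
        else if (current != (uint64_t)recorded)
            printf("MODIFIED: %s\n", path); /* changed since baseline */
    }
    fclose(baseline);
    return 0;
}

the point being that a tool like this puts generic detection (of any change, not just known malware) directly in the user's own hands.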

as much as it seems like symantec is doing an about-face, they really haven't changed their tune. telling enterprises what they want to hear is just a ploy so that enterprises will get in bed with them (that's just what we call pillow talk, baby). they still aren't giving their users any new power to affect their own security outcomes. so far they're just offering words. nothing but sweet, sweet words that turn into bitter orange wax in your ears.

Friday, April 04, 2014

goto fail refactored

i wrote the lion's share of this a while ago but wasn't sure i wanted to publish yet another post about GOTO here since this isn't a programming blog. my mind was made up yesterday when i read this post by Steven J Vaughan-Nichols where he quotes a number of technology personalities essentially giving bullshit excuses for why GOTO is OK to use. it's no wonder 2 separate crypto libraries (both making prodigious use of GOTO) suffered embarrassing and dangerous defects recently when programming thought leaders perpetuate myths about structured programming.

i'm providing this as an object lesson in how to avoid the use of GOTO, especially in security-related code where a higher standard of quality is sorely needed. i'll be using Apple's Goto Fail bug as the example. here is the complete function where the fail was found, with the bug intact:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) != 0)
            goto fail;
        if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) != 0)
            goto fail;
        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) != 0)
            goto fail;
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
    hashOut.length = SSL_SHA1_DIGEST_LEN;
    if ((err = SSLFreeBuffer(&hashCtx)) != 0)
        goto fail;

    if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;

    err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
    if(err) {
        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                    "returned %d\n", (int)err);
        goto fail;
    }

fail:
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

one of the things you might notice is that all roads lead to "fail:", meaning "fail:" isn't really just for failures, it's for clean-up.

another thing you might notice is that the final "goto fail;" doesn't actually bypass any code - it's completely redundant and if it weren't there the next thing to execute would still be the code after the "fail:" label.

the first thing we're going to try is the most obvious approach to refactoring this function: getting rid of GOTO by making proper use of IF.

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;

    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        if ((err = ReadyHash(&SSLHashMD5, &hashCtx)) == 0) {    
            if ((err = SSLHashMD5.update(&hashCtx, &clientRandom)) == 0) {    
                if ((err = SSLHashMD5.update(&hashCtx, &serverRandom)) == 0) {    
                    if ((err = SSLHashMD5.update(&hashCtx, &signedParams)) == 0) {    
                        if ((err = SSLHashMD5.final(&hashCtx, &hashOut)) == 0) {    
                            hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
                            hashOut.length = SSL_SHA1_DIGEST_LEN;
                            if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
                                if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                                    if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                                        if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                                            if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                                                if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0)    {    
                                                    err = sslRawVerify(ctx,
                                                                       ctx->peerPubKey,
                                                                       dataToSign,                /* plaintext */
                                                                       dataToSignLen,            /* plaintext length */
                                                                       signature,
                                                                       signatureLen);
                                                    if(err) {
                                                        sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                                    "returned %d\n", (int)err);
                                                    }
                                                }
                                            }
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        if ((err = SSLFreeBuffer(&hashCtx)) == 0) {    
            if ((err = ReadyHash(&SSLHashSHA1, &hashCtx)) == 0) {    
                if ((err = SSLHashSHA1.update(&hashCtx, &clientRandom)) == 0) {    
                    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) == 0) {    
                        if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) == 0) {    
                            if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) == 0) {    
                                err = sslRawVerify(ctx,
                                                   ctx->peerPubKey,
                                                   dataToSign,                /* plaintext */
                                                   dataToSignLen,            /* plaintext length */
                                                   signature,
                                                   signatureLen);
                                if(err) {
                                    sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                                                "returned %d\n", (int)err);
                                }
                            }
                        }
                    }
                }
            }
        }
    }

    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;

}

as you can see, this version of the function is quite a bit longer as well as being deeply nested. this is the kind of code that actually makes programmers think the use of GOTO isn't as bad as their teachers told them it was, because that deep nesting makes the function seem more complex and more difficult to read. on top of that, there is a considerable amount of duplicated code. neither of these things is appealing to programmers, because they make reading and maintaining the code more work.

however, this is the most simple-minded and unimaginative way to refactor the original function. if we were to also tackle that complex pattern used in virtually all of the IF statements at the same time as getting rid of the GOTOs, we would instead get something like this:

static OSStatus SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams, uint8_t *signature, UInt16 signatureLen)
{
    OSStatus        err;
    SSLBuffer       hashOut, hashCtx, clientRandom, serverRandom;
    uint8_t         hashes[SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN];
    SSLBuffer       signedHashes;
    uint8_t         *dataToSign;
    size_t          dataToSignLen;

    signedHashes.data = 0;
    hashCtx.data = 0;
    err = 0;
    
    clientRandom.data = ctx->clientRandom;
    clientRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;
    serverRandom.data = ctx->serverRandom;
    serverRandom.length = SSL_CLIENT_SRVR_RAND_SIZE;


    if(isRsa) {
        /* skip this if signing with DSA */
        dataToSign = hashes;
        dataToSignLen = SSL_SHA1_DIGEST_LEN + SSL_MD5_DIGEST_LEN;
        hashOut.data = hashes;
        hashOut.length = SSL_MD5_DIGEST_LEN;
        
        err = ReadyHash(&SSLHashMD5, &hashCtx);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &clientRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &serverRandom);
        if (err == 0)
            err = SSLHashMD5.update(&hashCtx, &signedParams);
        if (err == 0)
            err = SSLHashMD5.final(&hashCtx, &hashOut);
    }
    else {
        /* DSA, ECDSA - just use the SHA1 hash */
        dataToSign = &hashes[SSL_MD5_DIGEST_LEN];
        dataToSignLen = SSL_SHA1_DIGEST_LEN;
    }

    if(err == 0) {
        hashOut.data = hashes + SSL_MD5_DIGEST_LEN;
        hashOut.length = SSL_SHA1_DIGEST_LEN;
        err = SSLFreeBuffer(&hashCtx);
    }
    if (err == 0)
        err = ReadyHash(&SSLHashSHA1, &hashCtx);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &clientRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &serverRandom);
    if (err == 0)
        err = SSLHashSHA1.update(&hashCtx, &signedParams);
    if (err == 0)
        err = SSLHashSHA1.final(&hashCtx, &hashOut);
    if (err == 0) {
        err = sslRawVerify(ctx,
                       ctx->peerPubKey,
                       dataToSign,                /* plaintext */
                       dataToSignLen,            /* plaintext length */
                       signature,
                       signatureLen);
        if(err) {
            sslErrorLog("SSLDecodeSignedServerKeyExchange: sslRawVerify "
                        "returned %d\n", (int)err);
        }
    }
    
    SSLFreeBuffer(&signedHashes);
    SSLFreeBuffer(&hashCtx);
    return err;
}

not only does this follow almost exactly the same format as the original function (thereby retaining its readability), it makes the condition checking simpler and easier to read, and it has virtually the same number of lines of code as the original.

combining assignment and equivalence tests into a single-line IF statement was clearly intended to reduce the overall size of the source code, but it failed, and in the process it made the code more complex and difficult to read. the combined assignment/condition checking and the GOTO statements were complementary: they supported each other and jointly contributed to the complexity of the original function.

this third version of the function, by contrast, has neither complex expressions nor the potential for complex control flow. the only real complaint one might make is that after an error occurs in one of the many steps in the function, the computer still needs to perform the "if (err == 0)" check numerous times. however, that is only true if the compiler can't optimize that code, and checking the same variable against the same constant value over and over again seems like the kind of pattern a compiler's optimization routines might be able to detect and do something about.

complexity is the worst enemy of security, sloppiness begets complexity, and GOTO is a crutch for sloppy, undisciplined programmers - it is part of that sloppiness and contributes to that complexity, even when it's supposedly used the right way. what i did above isn't rocket science or magic. the same basic technique can be used in any case where GOTO is used to jump forward in the code (if you're using it to jump backward then god help you), as the sketch below shows. the excuses people trot out for the continued use of GOTO not only make them sound like dumb-asses, they also lead lesser programmers to try to follow their lead and do much worse at it. it is never used as sparingly as the gurus think it should be, and even their own examples occasionally contain redundant invocations of it, thoughtlessly applied.
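
to make the generalization concrete, here's a contrived before-and-after sketch (step_one, step_two, step_three and cleanup are hypothetical placeholders, not from any real codebase):

/* hypothetical fallible operations followed by mandatory clean-up */
int step_one(void);
int step_two(void);
int step_three(void);
void cleanup(void);

/* before: forward GOTOs jump to a shared clean-up label */
int do_work_goto(void)
{
    int err;

    if ((err = step_one()) != 0)
        goto fail;
    if ((err = step_two()) != 0)
        goto fail;
    err = step_three();
fail:
    cleanup();
    return err;
}

/* after: the error code itself gates each step - same behaviour,
   same guaranteed clean-up, no jumps required */
int do_work_structured(void)
{
    int err;

    err = step_one();
    if (err == 0)
        err = step_two();
    if (err == 0)
        err = step_three();
    cleanup();
    return err;
}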

if you actually work at adhering to structured programming rather than abandoning it the moment the going gets tough, you will eventually learn ways to make it just as easy as unstructured programming, you'll be a better programmer for having done so, and your programs will be less complex, easier to validate, and ultimately more secure.

Tuesday, March 11, 2014

the case against GOTO in security

i could have made this longer but i have a feeling it might be more powerful in this form.

there is no programming construct that offers more freedom and flexibility than GOTO. consequently, no programming construct carries with it a greater potential for complexity.

since "complexity is the worst enemy of security", therefore GOTO should be considered harmful to and/or an enemy of security.

i'm surprised more people haven't made this connection, or that it hasn't seen more mainstream attention. whatever else you may think of GOTO in regular software, in security-related software this has to be an added consideration. the traditional taboos against GOTO that Larry Seltzer identified may not be entirely rational, but i tend to think the security taboo against complexity is.


Tuesday, February 25, 2014

goto fail, do not pass go, do not collect your next paycheck

by now you've probably heard about the rather widely reported SSL bug that Apple quietly dropped on friday afternoon, teasing security researchers into finding out what was up. if not, the gist of it is that the C code Apple used for verifying signatures used in SSL had what appears to have been a copy&paste error that broke the security and allowed people to read your supposedly secure traffic. literally there were 2 lines that said "goto fail;" when there should only have been one. now i'm not about to make a big deal about copy&paste errors because that can legitimately happen to anyone, but i am going to make a big deal about the content of that copy&paste error.
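
for reference, here's the offending excerpt (the comment is mine); the indentation makes the second goto look conditional, but it isn't, so err is still 0 when control jumps to the clean-up label and the actual signature verification further down never runs:

    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;    /* duplicated line - executes unconditionally */
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;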

the overall lack of acknowledgement (and in some cases denial) that the use of goto represents a deeper problem in Apple's application security is itself suggestive of a failure to recognize a fundamental principle: in software, quality is the foundation upon which security must stand.

goto is representative of the kind of spaghetti code we had before the introduction of structured programming approximately 50 years ago. no, that's not a typo. goto has been falling out of favour for about half a century and when i saw how much it was used in Apple's code it raised a red flag for me. every programmer i broached the subject with similarly found it concerning - including my boss who admitted he hasn't coded in C in over 30 years. he wondered, as do i, if the programmer responsible for the code in question still has a job at Apple.

you may think me rehashing a decades old debate, and perhaps i am - i wouldn't know, i never read any of that stuff - Edsger Dijkstra's letter "Go To Statement Considered Harmful" was published about 7 years before i was born. what i'm not doing, however, is mindlessly repeating dogma. we're interested today in application security and, as we have already covered, that requires software quality. structured programming produces software that is higher quality, easier to read, easier to model, easier to audit, easier to prove, etc. than unstructured programming.

why is this so? to answer that we need to think about what "structured programming" means. it is the nature of structure (all structure) to serve as a kind of constraint. your bones, for example, provide structure for your body and in so doing limit the way your body can move and bend. the support pillars for a bridge provide structure for that bridge and limit the extent to which the bridge can move and bend (yes, they bend and flex, but if the structure is doing its job they only flex a little). likewise, code that follows the structured programming paradigm is constrained such that program control flows in a more limited number of ways. reducing the number of possibilities for program control flow makes it easier to predict (almost by definition) how a block of code's control will flow with just a quick glance. fewer possibilities mean it's easier to know what to expect. i'm sure you've seen the same effect with the written word. just like it's easier to read and understand sentences made with familiar words and phrases and following a few familiar construction patterns, the same is true for reading and understanding code as well. it's just another language, after all.

that reduction of possibilities also reduces the complexity of the code, which makes building a mental model of an arbitrary block of code easier. the constructs that make structured programming what it is lend themselves more naturally to abstraction since random lines of code within them are unlikely to cause program control to jump to random other places in the code. making it easier to build a mental model of the code makes it easier to formally prove the code's correctness because it's easier to describe the algorithm you're trying to prove. less formally, greater ease in building accurate mental models of the code means that it's easier to anticipate outcomes, especially unwanted ones that you want to eliminate, before they happen because it becomes easier to run through those possibilities in your head.

finally, both the greater ease of reading/understanding and the greater ease of modeling benefit efforts of others to review or audit the code. they're really only doing the same thing the programmer him/herself would have done by reading and understanding the code, creating a mental model of it, and trying to anticipate unwanted outcomes.

i work as a programmer professionally in a small company with not a lot of resources. we fly by the seat of our pants in some ways, but if someone asked me to review code that relied as much on goto as this source file from Apple, i wouldn't accept it. i'm surprised that code like that is able to survive for so long in a company with as many resources as Apple has. it makes me wonder about the programming culture within the company and it reminds me of a talk Jacob Appelbaum gave not too long ago where he accused them of writing shitty software. sure, code reviews and more rigorous testing might have found the copy&paste error that sparked this off, but those processes don't add quality, they subtract problems. it's still a garbage-in/garbage-out sort of scenario, so there's only so much they can do to affect the quality of Apple's software. quality has to go into those filters before you can get quality out.

i've often heard it said that regular programmers typically don't understand the nuances involved in writing secure code, especially when it comes to crypto, and having seen programmers more senior than myself flub crypto code i can certainly agree with that sentiment. that being said, i think it's probably also true that regular security people typically don't understand the nuances involved in writing quality code. since quality is a prerequisite for security, it's just as important for a programmer responsible for security-related code to have mastered the coding practices and techniques that lead to quality software as it is for them to understand secure software development.

i'll concede that it may well be possible for a master programmer to produce high quality, highly readable code that relies as heavily on goto as Apple's programmers appear to; but, as "The Tao Of Programming" satirically points out, you are not a master programmer, almost none of you are, so stop pretending and learn the lessons they've been trying to teach for the past 50 years.

(and now i'll go read Dijkstra's letter. maybe this is a rehash, but that wouldn't make it wrong even if it were)

(updated to fix the spelling of Jacob Appelbaum's name. thanks to Martijn Grooten)

Saturday, November 02, 2013

AV complicity explained

earlier this week i wrote a post about the idea of the AV industry being somehow complicit in the government spying that has been all over the news for months. some people seemed to really 'get it' while others, for various reasons, did not; so i thought i'd try to be a little more clear about my thoughts on the subject.

the question that the EFF et al have put to the AV industry (besides having already been asked and answered some years ago) is a little banal, a little pedestrian, a little sterile. real life is messy and complicated and things don't always fit into neat little boxes. i wanted to try to get people to think outside the box with respect to complicity - what it means, what it would look like, etc. - but i think some people have a hard time letting go of the straightforward question of complicity that has been put forward, so let's start by talking about that.

has the NSA (or other organization) asked members of the AV industry to look the other way, and has the AV industry (or parts thereof) agreed to that request? almost certainly the NSA has not made such a request, for at least a few reasons:

  1. telling people about your super-secret malware is just plain bad OpSec. if you want to keep something secret, the last thing you want to do is tell dozens of armies of reverse engineers to look the other way.
  2. too many of the companies that make up the AV industry are based out of foreign countries and so are in no way answerable to the NSA or any other single intelligence organization.
  3. there's quite literally no need. there are already well established techniques for making malware that AV software doesn't currently detect. commercial malware writers have been honing this craft for years and it seems ridiculous to suggest that a well-funded intelligence agency would be any less capable.


now while it seems comical that such a request would be made, to suggest that the AV industry would agree to such a request would probably best be described as insulting. whatever you might think of the AV industry, there are quite a few highly principled individuals working in it who would flat out refuse, in all likelihood regardless of what their employer decided (in the hypothetical case that the pointy-haired bosses in AV aren't quite as principled).

now please feel free to enjoy a sigh of relief over the fact that i don't think the AV industry has secretly agreed to get into bed with the NSA and help them spy on people.

done? good, because now we're going to take a deeper look at the nature of complicity and the rest of this post is probably not going to be nearly as pleasant.

here's one of the very first things wikipedia has to say about complicity:
An individual is complicit in a crime if he/she is aware of its occurrence and has the ability to report the crime, but fails to do so. As such, the individual effectively allows criminals to carry out a crime despite possibly being able to stop them, either directly or by contacting the authorities, thus making the individual a de facto accessory to the crime rather than an innocent bystander.

in the case of government spying we may or may not be talking about a crime. the government says they broke no law and observers speculate that that may be because they've subverted the law (much like they subverted encryption algorithms). so let's consider a version of this that relates to ethical and/or moral wrong-doing instead of legal wrong-doing:
an individual is complicit in wrong-doing if he/she is aware of its occurrence and has the ability to alert relevant parties but fails to do so. as such, the individual effectively allows immoral or unethical people to carry out their wrong-doing despite possibly being able to stop them, either directly or by alerting others who can, thus making the individual a de facto accessory to the wrong-doing rather than an innocent bystander.

in this context, could the AV industry be complicit with government spying? perhaps not directly, not in the sense that they saw what the government was doing and failed to alert people to that wrong-doing. however, what about a different wrong-doing by a different entity but still related to the government spying?

hbgary wrote spyware for the government. this became public knowledge in the beginning of 2011. by providing the government with tools to perpetrate spying they become accessories to that spying.

hbgary was and is a partner of mcafee. now what is the nature of this partnership? hbgary is an integration partner. they make technology that integrates into mcafee's endpoint security product to extend its functionality. mcafee does marketing/advertising for this technology and by extension for hbgary, giving them exposure, lending them credibility, and generally helping them make money. that money is almost certainly re-invested into research and development of hbgary's products, which includes governmental malware that's used for spying on people/organizations. there are mcafee customers out there right now whose security suite includes components that were written by known malware writers and endorsed by mcafee (although they make sure to weasel out of responsibility for anything going wrong with those components with some fine print). mcafee didn't break off the partnership when hbgary's status as an accessory to government spying became known, and since they didn't break off the partnership you can probably make a safe bet that they didn't warn those customers that part of their security suite was made by people aiding the government in spying either. even if we ignore the fact that mcafee aids a business that writes malware for the government, mcafee's failure to raise the alarm about the possibly compromising nature of any content provided by hbgary makes them accessories to hbgary's wrong-doing. by breaking ties with hbgary and warning the public about what hbgary was up to, they could have had a serious impact on hbgary's cash flow and hurt their ability to win contracts and/or execute on their more offensive espionage-assisting projects. they didn't do any of that, and that makes them complicit in the sense discussed a few paragraphs earlier.

the rest of the AV industry may not be directly aiding hbgary's business but, like mcafee, they have failed to raise any alarm about hbgary. they could have done much the same as mcafee by warning the public, with the added bonus that they would have hurt one of the biggest competitors in their own industry while they were at it and that would have benefited all of them (except mcafee, of course). again, failing to act to help prevent wrong-doing makes them a de facto accessory to that wrong-doing. the AV industry as a whole is complicit in the sense discussed earlier.

of course, the AV industry isn't alone in being accessories to an accessory to government spying, and that brings up a consideration that should not be overlooked, because there is a larger context here. historically, the culture of the AV industry has been one that values being very selective in things like who to trust, who to accept into certain groups, etc. add to that a very narrowly defined mission statement (to fight viruses and other malware) and it's little wonder that the ethical boundaries that developed in the early days were so dead-set against hiring, paying, or doing anything else that might assist malware writers or possibly promote malware writing. heck, i knew one member who wouldn't even engage virus writers in conversation, and another who said he was wary of hiring anyone who already knew about viruses just in case they came by that knowledge through unsavoury means. aiding malware writers, turning a blind eye to their activities, etc. are things that normally would have violated AV's early ethical boundaries.

by contrast, the broader security industry is highly inclusive and has long viewed the AV industry's selectivity as unfair elitism. that inclusivity means that the security industry isn't actually just one homogeneous group. there are many groups, from cryptographers to security operations personnel to vulnerability researchers to penetration testers, etc. each one has its own distinct mission statement and its own code of ethics. what do you think you get from a highly inclusive melting pot of security disciplines? well, in order for them to tolerate each other, one necessary outcome is a very relaxed ethical 'soup'. many quarters openly embrace the more offensive security-related disciplines such as malware creation. in order for AV to integrate into this broader security community (and they have been, gradually, over time), AV has to loosen its own ethical restrictions and be more accepting.

so while the AV industry failed to raise the alarm about hbgary, the broader security industry failed as well. the difference is that ethics in the security industry don't necessarily require raising an alarm over what was going on. hbgary is a respected company in security industry circles and its founder greg hoglund is a respected researcher whose proclivity for creating malware has been known for a long, long time. as far as the security industry is concerned, hbgary's activities don't necessarily qualify as ethical wrong-doing. there will probably be those who think it does, but in general the ethical soup will be permissive enough to allow it, and without being able to call something "wrong-doing" there can be no complicity. this is where AV is going as it continues to integrate into the broader security community. in fact it may be there already. maybe that's the reason they didn't raise the alarm - because they've become ethically compromised, not as a result of a request from some intelligence organization, but as a result of trying to fit in and be something other than what they used to be.

in the final analysis, if you were hoping for a yes or no answer to the question of whether AV is in any way complicit in the spying that the government has been doing (specifically, the spying done using malware), i'm afraid you're going to be disappointed. it depends. based on AV's earlier ethics the answer would probably be yes. based on the security community's ethics the answer may well be no. where is the AV industry now? somewhere between what they were and what the broader security community is. ethical relativity is unfortunately a significant complicating factor. then again, i'm an uncompromising bastard, so i say "yes" (after all, i did grow up with those old-school ethics).