Indicators of Behavior (IoB) - Feb 2023
OCA Community Connect
Roseann Guttierrez
https://opencybersecurityalliance.org/
Launched: Jan 24, 2024
Season 1, Episode 1
In this podcast episode, Charles Frick, a Chief Scientist at Johns Hopkins University Applied Physics Laboratory, discusses the Indicators of Behavior (IOB) subproject under the Open Cybersecurity Alliance. He explains the need for open standards to represent cyber adversary behaviors, aiming to share detections with longer shelf lives than current Indicators of Compromise (IOCs). Charles also emphasizes the importance of automation in cybersecurity to keep pace with adversaries and calls for community involvement to improve reference implementations, partner with other initiatives, and contribute to the project's GitHub repository. He invites feedback, collaboration, and volunteer efforts to advance the project's goals.
Blog on Indicators of Behavior (IOB)
https://opencybersecurityalliance.org/introducing-the-indicators-of-behavior-iob-sub-project/
Reference Links:
Open Cybersecurity Alliance (OCA) website:
https://opencybersecurityalliance.org/
Open Cybersecurity Alliance (OCA) GitHub
https://github.com/opencybersecurityalliance
Open Cybersecurity Alliance (OCA) YouTube
https://www.youtube.com/channel/UCjTpPl2oEGH_Ws251m827Cg
Share Your Ideas & Guest Suggestions!
Got a topic or an expert in mind for "OCA Community Connect"? We’re always on the lookout for fresh insights and voices in cybersecurity and open-source innovation.
How to Contribute:
Topics: Tell us what you’re curious about in the cybersecurity world.
Guests: Know someone who’d be a great interview? We’d love to hear about them.
Reach Out: Drop us an email or message us on social media. Your suggestions help shape our show, and we can’t wait to hear from you!
Transcript
Roseann Guttierrez [00:00:00]:
I wanted to introduce our guest speaker. His name is Charles Frick. He's a chief scientist at Johns Hopkins University Applied Physics Laboratory, and he's here to talk a little bit about our Indicators of Behavior subproject. So, Charles, I'll let you take it from here. I know I didn't do much of an intro for you, so is there anything you want to add before I ask you some questions?
Charles Frick [00:00:20]:
Just one small thing, because I don't want to sell myself too highly. I'm a chief scientist at Johns Hopkins Applied Physics Lab, where I oversee research in our cyber operations area for what we call the Capabilities Development Group. Out of respect for my fellow chief scientists at the laboratory, I just wanted to caveat that I tend to focus more on the cybersecurity automation and threat intelligence sharing research that we do here.
Roseann Guttierrez [00:00:51]:
To start off, then, I just have a couple of questions for you. Give me your elevator pitch on what the IOB is.
Charles Frick [00:00:59]:
Absolutely. First of all, thanks, everybody, for having me today. One of the hats I wear is chair of the Indicators of Behavior subproject under the Open Cybersecurity Alliance. As for the elevator pitch: we were looking to define ways to use open standards to represent cyber adversary behaviors, because we wanted to share detections, and ways to correlate detections, that were more effective and had longer shelf lives than a lot of the current work we see in sharing indicators of compromise. IOCs, for those of us familiar with them, are very actionable for the most part, but they have insanely short shelf lives; once you see one in the wild, it's not an active threat for very long. We wanted to share things that would persist longer. When we look at the analytics being shared, there are some really great analytics out there that can be used for longer periods of time, but they're very tailored to specific campaigns.
Charles Frick [00:02:07]:
A lot of folks do that because we want to reduce the false positive rate. We still have this mindset that the analytic is going to feed directly into something like our SIEM, and a human is still going to look at those alerts, so we need low false positives. To make a highly accurate analytic, it ends up tailored to "this is APT umpty-squat running campaign X, and if that group does this again, this might detect it." We wanted to think about it a little differently: what if I have a kind of two-pass analytic, where automation looks at the first wave? I might not care if an individual analytic has a high false alarm rate; what I care about is whether these two, three, or four analytics produce data you can correlate across. One of the patterns in our example is: I want to know any time a mail client launches a web browser. I'm not going to alert a human on that. But if the same machine has the mail client open a web browser, the web browser access a macro-enabled Office file, and that same computer modify the system registry within a certain timeline, I care about that, the fact that it all happened on the same machine. Right.
Charles Frick [00:03:30]:
And if the user account that opened that email is eventually tied to spawning a process that's now owned by the system account on that computer, I care about that. If other processes start sending weird network traffic to a domain controller, I care about that in itself, but the fact that all of it can be tied together and leave those breadcrumbs makes it a lot easier to detect that there's some weird behavior going on that is probably tied to an adversary. So we wanted to look at how we could share those detections, and also share how you cross-correlate them, and that led us to start our current work on Indicators of Behavior. Long elevator pitch... it was a very tall building.
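To make the "two-pass" idea concrete, here is a minimal sketch in Python. It assumes hypothetical analytic names, fields, and thresholds, and is purely illustrative; it is not part of the IOB specification or its reference implementations.

```python
# Sketch of the two-pass correlation idea: individually noisy analytics emit
# low-confidence events, and only a combination of distinct analytics firing
# on the same host within a time window is escalated for human review.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AnalyticHit:
    analytic: str       # e.g. "mail_client_spawned_browser"
    host: str           # machine where the behavior was observed
    timestamp: datetime

# First pass: each of these would fire far too often to alert a human on its own.
NOISY_ANALYTICS = {
    "mail_client_spawned_browser",
    "browser_opened_macro_enabled_doc",
    "registry_modified",
}

def correlate(hits: list[AnalyticHit],
              window: timedelta = timedelta(minutes=30),
              min_distinct: int = 3) -> list[str]:
    """Second pass: flag hosts where enough distinct noisy analytics
    fired within the given time window."""
    flagged = []
    by_host: dict[str, list[AnalyticHit]] = {}
    for hit in hits:
        if hit.analytic in NOISY_ANALYTICS:
            by_host.setdefault(hit.host, []).append(hit)
    for host, host_hits in by_host.items():
        host_hits.sort(key=lambda h: h.timestamp)
        for i, start in enumerate(host_hits):
            in_window = [h for h in host_hits[i:]
                         if h.timestamp - start.timestamp <= window]
            if len({h.analytic for h in in_window}) >= min_distinct:
                flagged.append(host)
                break
    return flagged

if __name__ == "__main__":
    now = datetime.now()
    hits = [
        AnalyticHit("mail_client_spawned_browser", "workstation-7", now),
        AnalyticHit("browser_opened_macro_enabled_doc", "workstation-7", now + timedelta(minutes=2)),
        AnalyticHit("registry_modified", "workstation-7", now + timedelta(minutes=5)),
    ]
    print(correlate(hits))  # -> ['workstation-7']
```

The point of the sketch is that no single analytic is accurate enough to page a person, but the breadcrumbs tying them to one machine within one timeline are.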
Roseann Guttierrez [00:04:26]:
No, understood. Understood. So what makes it important to you, this working group, this subproject?
Charles Frick [00:04:33]:
What makes it important to me, really, is seeing the need to push our network defense forward and better leverage automation. I've been doing cybersecurity automation pilots and research for close to 10 years now, and I keep seeing the majority of our organizations still heavily mired in manual processes, which is completely unsurvivable.
Roseann Guttierrez [00:04:59]:
Right.
Charles Frick [00:04:59]:
You know, the bad guys are using automation. I'll probably get the attribution wrong, but I think it was Michael Daniel who said it first: that's bringing people to a software fight, and that's never going to win. So we started looking at automation, and I keep seeing us focus so much on indicators of compromise and blocking them. That's very important to do in a very short time frame, and I'm talking minutes. Where we've had our best success is having a community block an IOC within 3 to 5 minutes of it first being seen in the wild, knowing that it's going to age off in a matter of days, for the most part. We can have the debate on file hashes, but the bad guys aren't stupid. Amazingly, they also know how to look up their IP addresses on VirusTotal, and if they see their infrastructure publicly being broadcast as part of a cyberattack, they move on, because they want to keep attacking.
Roseann Guttierrez [00:06:05]:
As you would.
Charles Frick [00:06:06]:
So it got us looking at the fact that we need a better way to do this, and the answer can't be to make more and more super complex things that need master's degrees and PhD-level computer scientists to execute, because that workforce doesn't exist at the scale we need it. We need the vendors and the MSSPs to be able to scale out, and for them to scale out, as we've seen elsewhere, we need better standards to define and standardize information so that they can start adapting tools and automation on behalf of their customer bases. And so that's why I care about this.
Roseann Guttierrez [00:06:49]:
Nice. All right. So speaking of needs, then, what does your subproject need? Where could you use some help?
Charles Frick [00:06:57]:
Well, we're always looking for folks to help review some of the development we're doing on our reference implementations. I'd be doing a bad job if I didn't plug the IOB repository on GitLab... I'm sorry, GitHub. We regularly post public releases of some of our samples. We're using STIX with several extensions right now, plus some custom objects, and we would love to get more feedback. So folks who are willing to take a look at our samples and provide constructive criticism and suggestions for improvement are always welcome. Additionally, over the coming year we're looking to partner with a few other initiatives, not just within the OCA but also other opportunities that might arise, and people, or organizations, that want to volunteer some time to work on the standard together are always welcome and can be greatly helpful. We can develop some simple analytical tools to make parts of this easier, but it does take a community. We also have some active groups looking to build out ontologies; if folks out there are interested in that, we welcome your contributions.
Charles Frick [00:08:24]:
Basically, anybody who wants to help in any way whatsoever with designing machine-readable data, we'll find a job for you. And if there's something you think we're not doing, we're pretty glad to let you take the lead in getting us started on it.
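For anyone curious what reviewing those samples might look like, here is a minimal sketch using the OASIS stix2 Python library. The file name is a placeholder, it assumes the sample is a STIX 2.1 bundle, and allow_custom=True is used because the samples rely on extensions and custom objects the library does not know natively.

```python
# Hypothetical sketch: load one of the IOB sample bundles from the GitHub
# repository and summarize which STIX object types it contains, as a first
# step toward reviewing its structure. The path below is a placeholder.
import json
from collections import Counter

import stix2

with open("iob_sample_bundle.json") as f:     # placeholder path to a downloaded sample
    bundle = stix2.parse(json.load(f), allow_custom=True)

# Count the object types in the bundle, including any custom types,
# to get oriented before giving feedback on the sample.
type_counts = Counter(obj["type"] for obj in bundle.objects)
for obj_type, count in type_counts.most_common():
    print(f"{obj_type}: {count}")
```

This is only one way to start poking at the samples; the subproject's own tooling and conventions in the GitHub repository take precedence over anything shown here.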
Roseann Guttierrez [00:08:38]:
Awesome. Thanks so much, Charles.