For this ThreatModeler Blog Special Edition, we recap our Fireside Chat (with a link to the webcast), moderated by Ty Sbano, Chief Security & Trust Officer at Sisense, with panelists:

  • Praveen Nallasamy, Vice President, Cybersecurity at BlackRock
  • Tom Holodnik, Software Architect, Intuit
  • Archie Agarwal, Founder and CEO, ThreatModeler
  • Reef D’Souza, Senior Security Consultant, AWS
  • Yeukai Sachikonye, Consultant, Engagement Manager of Global Security & Infrastructure Practice, AWS

Organizations the world over have found great success in offloading their software development to the cloud.

For the broader organization, it offers massive efficiencies and cost savings. For developers, it can dramatically accelerate workflows and provide advanced computing and storage capabilities otherwise out of reach for all but the biggest enterprises, among a seemingly endless array of other benefits.

Still, many organizations have yet to take the plunge. Their reasons are varied, of course, but the task of getting your organization ready for cloud development often requires more sophisticated security knowledge than the average functionality-focused developer can provide.

To help organizations across the finish line, we at ThreatModeler organized a fireside chat with leading global industry experts to discuss preparing one’s organization for migrating workflows to Amazon Web Services (AWS). The group also discussed the challenges they faced along the way.

Security Frameworks Provide a Department-By-Department Blueprint of Cloud Migration Best Practices

Chief among the cloud’s many selling points is its pricing efficiency. However, poor migration planning and a failure to consider native AWS features like elasticity can increase your overall costs.

How can you avoid that?

Yeukai Sachikonye works with customers on Amazon Web Services’ security and infrastructure team, collaborating with them on significant organizational change. AWS’s Cloud Adoption Framework (CAF) acts as a blueprint for the organizational evolution necessary to migrate. It offers guidance for each department and outlines the new skills and processes that must be learned and adopted, including cloud-native processes.

Since cultural change is a required component of migration, this educational effort needs to start at the top of the organizational chart. For a security-focused culture to stick, it has to be vocally endorsed by upper management.

“We start with the organization’s leadership, and we really work with them to make the cultural shift,” she said. “From speaking about moving to cloud – and the worries they have about the security – to actually doing that in action.”

CAF is a tremendous organization-wide tool to help educate, upskill, train, and implement secure and sustainable processes and structures within an organization.

When Considering the Cloud, First Take an In-Depth Look at Your Workflow

Security consultant Reef D’Souza recommends organizations first consider their workflow, the underlying business need, and the risks associated with that workflow.

From there, organizations can work backward to tie this risk profile to core security capabilities, internal standards, or, where applicable, regulatory requirements.

Next, conduct threat modeling to automatically expose vulnerabilities, then task your development team with mitigating this backlog. Take an agile approach to developing these security capabilities.

Finally, because moving to the cloud involves deploying all of your infrastructure as code, you can treat this as a software development function.
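To make “infrastructure as code” concrete, here is a minimal sketch using the AWS CDK for Python (CDK v2 assumed); the stack and bucket names are placeholders, and the settings shown are illustrative rather than a prescribed baseline. Because the template is ordinary code, it can be peer-reviewed, tested, and version-controlled like any other software artifact.

```python
# Minimal AWS CDK (v2) sketch: security expectations expressed as code,
# so infrastructure changes go through the same review and test pipeline as software.
from aws_cdk import App, Stack, RemovalPolicy
from aws_cdk import aws_s3 as s3
from constructs import Construct

class DataStoreStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Encryption and public-access blocking are declared up front,
        # not bolted on after deployment.
        s3.Bucket(
            self, "CustomerDataBucket",
            encryption=s3.BucketEncryption.S3_MANAGED,
            block_public_access=s3.BlockPublicAccess.BLOCK_ALL,
            enforce_ssl=True,
            removal_policy=RemovalPolicy.RETAIN,
        )

app = App()
DataStoreStack(app, "data-store")  # placeholder stack name
app.synth()
```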

Why Threat Modeling?

Threat modeling, in its simplest form, is an exercise to enumerate potential threats. Through this process, you understand the entry points, the assets, and all the paths an attacker can take to reach those assets. Archie Agarwal, Founder and CEO of ThreatModeler Software, Inc., explains:

“Traditionally, a group of security experts would take an architectural diagram and do ‘evil brainstorming’ to identify all the ways hackers can hack into the system, then what controls can be put in place to mitigate the risk. This process is extremely useful in understanding the attack surface … We’ve seen the evolution of threat modeling from 2005 to 2020.”

Until a few years ago, the cybersecurity industry relied on scanning tools to identify threats and vulnerabilities. The challenge is that a scanning tool does not provide an overall understanding of the attack surface; it only looks at vulnerabilities in isolation, resulting in an inaccurate depiction of the overall security posture and incomplete information about security issues. Threat modeling, on the other hand, is very effective for gaining an overall understanding of your security posture. It should be used as an identification process.

Archie continues, “Scanning tools are going to complement it by verifying if those threats are mitigated. Now companies have realized that the proactive nature of threat modeling allows them to identify those potential security issues in the design stage. It also provides the guidance for developers to build security into the code.” Archie affirms that the result is a reduction in the overall cost of security.

Identity and Access Management (IAM) Helps Reduce Your Organization’s Attack Surface

Moving to the cloud means considering the potential scale of your operations, too. According to Ty Sbano, Chief Security & Trust Officer, Sisense, you should plan for workloads at multi-region, multi-account scale.

This added scalability will add some complexity to your identity and access management efforts, extending beyond role-based access control and authorization to your account structure and governance.

Defining the security groups that complement your IAM controls makes up the bulk of the planning work here. Consider your security groups and your VPC designs, as well as your VPC connectivity if you have a hybrid environment.
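As a small, hedged illustration of that planning, the boto3 sketch below creates a narrowly scoped security group; the VPC ID and CIDR range are placeholders, and a real design would be driven by your own VPC layout and connectivity requirements.

```python
import boto3

ec2 = boto3.client("ec2")

# Illustrative only: a security group that admits HTTPS from the corporate
# network and nothing else.
sg = ec2.create_security_group(
    GroupName="app-tier-https-only",
    Description="HTTPS from corporate network only",
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "corporate network (placeholder)"}],
    }],
)
```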

We generally recommend the creator-consumer-manager framework, where individuals gain access to applications and various resources based on their function. Developers, testers, architects, and DevSecOps automation specialists are Creators. Managers need access to materials for conducting governance, compliance, and auditing. Consumers should have the least access of all, only to public-facing components.
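As an illustration only (the role names and the managed policies attached to them are hypothetical, not a prescribed mapping), the creator and manager functions might be bootstrapped with boto3 along these lines; consumers typically reach only public-facing endpoints and need no direct AWS principals at all.

```python
import json
import boto3

iam = boto3.client("iam")

# Hypothetical mapping of the creator and manager functions to AWS managed policies.
# Consumers reach only public-facing components, so no AWS principal is created for them.
role_policies = {
    "creator-role": ["arn:aws:iam::aws:policy/PowerUserAccess"],
    "manager-role": ["arn:aws:iam::aws:policy/SecurityAudit"],
}

# Placeholder trust policy: roles are assumed from a central identity account.
trust_policy = json.dumps({
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
    }],
})

for role_name, policy_arns in role_policies.items():
    iam.create_role(RoleName=role_name, AssumeRolePolicyDocument=trust_policy)
    for arn in policy_arns:
        iam.attach_role_policy(RoleName=role_name, PolicyArn=arn)
```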

Next, consider your environment’s visibility. You’ll want to put centralized logging and monitoring in place, and from there, firmly define security incidents so that alerts can be set up.
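One common way to express that on AWS (a sketch only: the log group name and SNS topic are placeholders, and the filter pattern is the widely used unauthorized-API-call example rather than a complete incident definition) is to turn a defined incident into a CloudWatch metric and alarm.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Assumes CloudTrail already delivers events to this log group (placeholder name).
LOG_GROUP = "org-cloudtrail-logs"

# Turn one defined "security incident" -- unauthorized API calls -- into a metric...
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="UnauthorizedApiCalls",
    filterPattern='{ ($.errorCode = "*UnauthorizedOperation") || ($.errorCode = "AccessDenied*") }',
    metricTransformations=[{
        "metricName": "UnauthorizedApiCalls",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# ...and alert on it.
cloudwatch.put_metric_alarm(
    AlarmName="unauthorized-api-calls",
    Namespace="Security",
    MetricName="UnauthorizedApiCalls",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:111122223333:security-alerts"],  # placeholder topic
)
```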

Lastly, define the mechanisms that provide access, carefully tracing this logic to ensure that low-level users can’t elevate their access.
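A permissions boundary is one AWS mechanism for this kind of tracing: whatever policies a role accumulates later, its effective permissions can never exceed the boundary. The sketch below is illustrative; the denied actions and names are placeholders, not a complete anti-escalation policy.

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative boundary: whatever a developer role is later granted, it can never
# create or modify IAM principals and policies (a common privilege-escalation path).
boundary_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
        {"Effect": "Deny",
         "Action": ["iam:Create*", "iam:Put*", "iam:Attach*", "iam:Update*", "iam:Delete*"],
         "Resource": "*"},
    ],
}

boundary = iam.create_policy(
    PolicyName="developer-permissions-boundary",
    PolicyDocument=json.dumps(boundary_doc),
)

iam.create_role(
    RoleName="developer-role",  # placeholder name
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "ec2.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }),
    PermissionsBoundary=boundary["Policy"]["Arn"],
)
```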

How to Handle Data Protection

How do you keep your data protected? This is perhaps the most common question in cybersecurity, and for good reason.

Many approach cybersecurity assuming that encryption is the silver bullet, the be-all and end-all of locking hackers out of environments. In reality, encryption isn’t wholly secure. The weakness lies not in the encryption process itself, but in the network endpoints on either side of the encrypted channel, which are frequently the targets of attacks.

But while encryption isn’t a silver bullet, it’s still a necessity to protect customer data.

Again, consider the future scale of your network, and assess how you might encrypt your data at scale across user accounts, services, hierarchies, and critical lifecycle processes.
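As one hedged example of encrypting at scale, a single customer-managed KMS key can be set as a bucket’s default encryption so every object is encrypted uniformly rather than per upload; the bucket name is a placeholder and the key policy is omitted for brevity.

```python
import boto3

kms = boto3.client("kms")
s3 = boto3.client("s3")

# Sketch: one customer-managed KMS key reused as a bucket's default encryption key.
key = kms.create_key(Description="customer-data key")  # key policy omitted for brevity
key_arn = key["KeyMetadata"]["Arn"]

s3.put_bucket_encryption(
    Bucket="example-customer-data",  # placeholder bucket
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": key_arn,
            },
            "BucketKeyEnabled": True,
        }],
    },
)
```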

Tom Holodnik, Software Architect at Intuit, said his organization provides many layers of security around its customer data. After conducting threat modeling to surface all existing security risks and mitigate each and every one, a total of four or five layers of protection were put in place.

“It’s really a matter of protecting data, setting standards, and enforcing them,” Holodnik said. “You should be operating your environment in AWS in a manner consistent with the standards. We can be certain; we enforce these standards with automated checking and automated compliance tests.”

Look at the reachability of those data assets from distributed components. Then map the availability of security controls. Assume the failure of every component in your environment that stores or processes sensitive data. Think about all the ways those components can fail and how they can be better designed for greater resilience.

You’ll also want to ensure there are as few mistake-prone human beings near your data as possible, to reduce accidents. Automation should drive data flows to ensure that your network environment behaves as consistently as possible.

After data protection features have been successfully integrated, Holodnik said integrity validation checks should be run frequently.
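A minimal sketch of such a check, assuming you record a SHA-256 digest for each object at write time (the bucket, key, and stored digest below are placeholders):

```python
import hashlib
import boto3

s3 = boto3.client("s3")

def object_integrity_ok(bucket: str, key: str, expected_sha256: str) -> bool:
    """Recompute an object's SHA-256 digest and compare it to the value recorded at write time."""
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return hashlib.sha256(body).hexdigest() == expected_sha256

# Run on a schedule against your stored digests; a mismatch signals corruption or tampering.
# object_integrity_ok("example-customer-data", "exports/2024-01.csv", stored_digest)
```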

Incident Response Is Your Plan for When All Else Fails

Much of modern cybersecurity relies on reactive approaches, tightening up security around specific areas to plug massive holes in your attack surface.

Incident response, the final security consideration, is about proactively preparing for the moment those defenses fail.

Incident response takes as a given that, with cyberattacks growing in both volume and sophistication, it’s in an organization’s best interest to ensure it can quickly identify an attack in progress and shut it down to mitigate further damage.

Steps should be taken to implement incident response within AWS environments, and AWS has native tools for the task. Teams should identify all possible entry points and the steps that could be taken, using AWS tools, to respond to them. Automated responses and scripts should be built to address a range of possible entry methods.
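As a small illustration of that kind of automation (the quarantine security group is assumed to already exist with no inbound or outbound rules, and in practice a script like this would typically be triggered from a detection finding via an event rule and a Lambda function), a containment step might look like this:

```python
import boto3

ec2 = boto3.client("ec2")

def quarantine_instance(instance_id: str, quarantine_sg_id: str) -> None:
    """Containment step: move a suspect instance onto a quarantine security group
    (assumed to exist with no inbound or outbound rules) to stop lateral movement."""
    ec2.modify_instance_attribute(InstanceId=instance_id, Groups=[quarantine_sg_id])
    # Tag the instance so responders can find and preserve it for forensics.
    ec2.create_tags(
        Resources=[instance_id],
        Tags=[{"Key": "incident-response", "Value": "quarantined"}],
    )

# Example invocation with placeholder IDs:
# quarantine_instance("i-0123456789abcdef0", "sg-0fedcba9876543210")
```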

Lessons Learned in Cloud Migration

Just as no two snowflakes are alike, no two cloud migrations are the same: they are hugely complex projects that bring a unique range of challenges to every organization developing on the cloud.

Our panelists discussed their most important lessons in the pursuit of cloud security.

Don’t Assume Perfect Behavior From Your Third Parties

Third parties are well known to expose organizations to security and compliance issues, often allowing malicious actors to move through environments, escalate their privileges, and launch an attack.

What’s more, third-party risk factors appear to be on the rise.

Tom Holodnik from Intuit said that all too often, organizations treat third parties like just another trusted application in their tech stack.

“I think that one of the first things we need to do is to assume that not all clients are going to be entirely trustworthy. I know lots of people who play by the rules,” he said. “But there are some who don’t.”

The threat modeling process should be done with third-party risk in mind, according to Holodnik. Instead of assuming trust, security infrastructure should assume, and plan for, the worst-case scenario.

Engage Teams Early in the Migration Process

Creating cultural change, even in cybersecurity, is a challenging task.

Praveen Nallasamy, VP of Cybersecurity at BlackRock, said he’s learned to engage teams as early as possible. “You don’t want the team to come to a point where they have access to everything. It’s not necessary at all.”

How to Put Practical Threat Modeling in Action With an Automated Tool

Integrating threat modeling into a DevOps process requires automation and collaboration, implementing a self-service model in which teams can build a threat model themselves. It also requires the right tooling to build a threat model in hours rather than days, which has been the norm with traditional approaches.

Through ThreatModeler’s joint offering with AWS, there are predominantly two use cases among customers migrating to the cloud: either they have workloads already deployed in dev or test environments, or they are literally starting from scratch. When users are new to the cloud and looking to start from scratch, what they want is an accurate representation of their architecture.

“For diagrams being deployed for AWS architectures, it’s simply a drag and drop activity where you’re trying to identify (within the workflow) what is it going to be inside a particular architecture,” says Nik Nagalia, Strategic Alliances and Solution Engineer for ThreatModeler. “From this point on, you simply start to visualize exactly what the threats and security requirements can be. You’re getting a visual representation of what can happen and where exactly does it impact your particular architecture.”

With automation, ThreatModeler enables you to create backlog items. Once the security review is complete, you can push the security requirements out to an issue tracking tool. As you start to implement security within the design phase itself, you will see the attack surface starting to reduce. Eventually you come to a point where you can figure out what exactly is happening within your architecture to scale with the latest and greatest.
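ThreatModeler provides its own integrations for this hand-off; purely to show its shape, the generic sketch below posts one security requirement to Jira’s REST API, with placeholder URL, project key, summary, and credentials.

```python
import requests

# Placeholder issue fields; a real pipeline would pull these from the threat model
# output and fetch credentials from a secrets store.
requirement = {
    "fields": {
        "project": {"key": "SEC"},
        "summary": "Enforce TLS 1.2+ on public-facing load balancer listeners",
        "description": "Identified during design-stage threat modeling of the payments workflow.",
        "issuetype": {"name": "Task"},
    }
}

resp = requests.post(
    "https://example.atlassian.net/rest/api/2/issue",  # placeholder Jira instance
    json=requirement,
    auth=("svc-threatmodeling@example.com", "api-token"),  # placeholder credentials
    timeout=30,
)
resp.raise_for_status()
print("Created backlog item:", resp.json()["key"])
```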

“You can compare the architecture against compliance to determine what you’ve implemented as security requirements – in terms of what needs to be done when it comes to these security configurations within the cloud.”

ThreatModeler supports MITRE, CAPEC, OWASP, NIST 800-53, and CIS, among others, and the library continuously evolves and grows over time. ThreatModeler has its own proprietary methodology called VAST, similar to Microsoft’s STRIDE or PASTA. It has been designed for DevSecOps and provides speed and agility to development teams, reducing the time to build a proper threat model from days or weeks to hours.

Read more about the differences between VAST and other threat modeling methodologies.

The Cloud’s Capabilities Can Transform Organizations While Reducing Cost

Organizations that haven’t yet invested in cloud architecture stand to gain access to significant efficiencies in workflows, expansions in capabilities, and reductions in cost. Thanks to the maturation of cybersecurity tools and systems, making the switch is easier than it’s ever been.