
How NIST SP 800-53 Revision 5 Affects Application Security

Published May 27, 2021 WRITTEN BY MICHAEL SOLOMON. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator, and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

The National Institute of Standards and Technology (NIST) is a non-regulatory agency of the U.S. Department of Commerce that, among other things, maintains physical science laboratories and produces guidance for assessing compliance with a wide range of standards. NIST has a long history of providing documents that help organizations and agencies assess compliance with cybersecurity standards and implement changes to strengthen both compliance and security. The NIST Special Publication (SP) series provides "guidelines, technical specifications, recommendations and reference materials, comprising multiple sub-series." One sub-series, SP 800, focuses on cybersecurity, specifically containing guidelines for complying with the Federal Information Security Modernization Act (FISMA). (If you are interested in digging further into NIST cybersecurity offerings, check out the relatively new SP 1800 cybersecurity practice guides as well.)

NIST SP 800-53, "Security and Privacy Controls for Information Systems and Organizations," provides guidance for selecting the most effective security and privacy controls as part of a risk management framework. The latest revision, revision 5, was released on September 23, 2020. Revision 5 includes requirements for RASP (runtime application self-protection) and IAST (interactive application security testing). While these approaches to application security are not new, making them a required element of a security framework is a first. Let's examine how NIST SP 800-53 revision 5 affects the secure application development process.
What NIST SP 800-53 contains

The initial version of SP 800-53 was released in 2005, titled "Recommended Security Controls for Federal Information Systems." SP 800-53 focused on federal information systems, at least through revision 4. Then revision 5 dropped the word "federal" from its title. That means SP 800-53 is now a more general guidance document that applies to commercial information systems as well as federal systems. This may seem a minor change, but it signals that NIST has expanded the scope of its recommendations. SP 800-53 is not a mandate, but it does represent a strengthening of guidance from the federal government for non-federal environments.

SP 800-53 contains a catalog of security and privacy controls, organized into 20 control families. Chapter two of SP 800-53 "describes the fundamental concepts associated with security and privacy controls, including the structure of the controls, how the controls are organized in the consolidated catalog, control implementation approaches, the relationship between security and privacy controls, and trustworthiness and assurance." The other major chapter, chapter three, contains the catalog of security and privacy controls itself, each of which includes a discussion of that control's purpose and how it fits into a layered security approach. The goal of SP 800-53 is to provide a consolidated guidance document that describes security and privacy controls, how they relate to one another, and how best to select, deploy, and assess the controls required for specific use cases.

What revision 5 means to application security

Although SP 800-53 revision 5 provides general guidance for selecting security and privacy controls, a noticeable portion of the changes since revision 4 focus on software. As […]

Read More

Biggest Cloud Breaches of 2020

Published May 20, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel

2020 was a year to remember, and one that many would like to forget, for a variety of reasons ranging from the largest global pandemic since the Spanish Flu of 1918, to political turmoil in the USA over a fractious Presidential race, to economic and employment dips of epic proportions. Indeed, 2020 also came with a number of record-setting security breaches, nearly all of which involved the cloud in some form or fashion. In fact, there are numerous top 10 security breach collections from which to choose. One in particular is worth reciting, and then reflecting on the cloud's presence in that itemized list.

PCR is a leading information source for IT resellers and distributors in the United Kingdom. It ranks its top 10 by the number of records breached in the incidents selected. PCR cites the Risk Based Security Report to observe that nearly 3,000 breaches were reported for Q1 2020 alone, and that the number of records exposed reached 36 billion (for the whole of 2019, "only" 15 billion records were exposed). Here's their top 10 list with some annotations and reflections, in ascending order by number of records breached:

10. Unknown source (201M): In January 2020, security researchers found a database containing over 200M sensitive personal records online. The compromised host was on the Google Cloud Platform, so although the source or owner of the data remains unidentified, there's no disputing that this collection of US personal and demographic data has a definite cloud connection. After Google was alerted to the matter, it took the server down over a month later.

9. Microsoft (250M): In January 2020, Microsoft itself reported a data breach on servers storing customer support analytics in its Azure Cloud. The records involved included email and IP addresses, plus support case details, stored on five ElasticSearch servers and inadvertently disclosed owing to misconfigured security rules.

8. Wattpad (268M): In June 2020, records belonging to this Canadian website and app, which writers use to publish user-generated stories, were exposed (later reports raised the count to 271M records). Malicious actors compromised the company's SQL database, which contained account information, email and IP addresses, and other personal data. Reports on this breach do not mention a specific cloud connection, but the site's current DNS information appears to show it is hosted by Amazon Web Services (a definite cloud connection).

7. Broadvoice (350M): Broadvoice is a US provider of Voice over IP (VoIP) services to businesses; October 2020 reports confirm exposure of 350 million customer records from this company. Data disclosed includes names, phone numbers, and call transcripts, including calls to medical and financial services providers. Owing to a configuration error, security researchers were able to access ten of the company's databases without providing access credentials. Broadvoice changed the configuration and notified the relevant legal authorities. It's not clear that these databases were cloud-based, though it's hard to imagine a VoIP company NOT doing business in the cloud.

6. Estée Lauder (440M): In January 2020, the company had an unprotected, unencrypted […]

Read More

Understanding RASP, and Putting It to Work

Published May 13, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel

RASP is an initialism for runtime application self-protection. It's a technology designed to boost application software security by monitoring inputs to running applications. RASP screens all such inputs and blocks those that could be associated with potential attacks. RASP also protects various aspects of an application's runtime environment, preventing changes to environment variables, access controls and privileges, and so forth. Gartner's IT Glossary defines RASP as follows: "a security technology that is built or linked into an application runtime environment, and is capable of controlling application execution and detecting and preventing real-time attacks." Numerous security companies offer RASP add-ins for widely used runtime environments, such as the Java Virtual Machine and the .NET Common Language Runtime. In fact, developers generally choose to buy RASP tools from such third parties instead of building their own implementations.

Putting RASP to Work

When integrated into an application's runtime, RASP incorporates security checks into supporting server environments. That is, RASP intercepts inputs sent to the application for screening, and either allows acceptable inputs to reach the application or denies questionable or malicious ones. RASP also includes built-in logging and monitoring facilities so it can keep track of what it's doing and make sure its actions are appropriate and secure. RASP implementations seek to maximize valid interceptions (preventing malicious or insecure inputs from obtaining application access).
At the same time, RASP monitoring, along with related updates from its makers, also seeks to avoid invalid interceptions (that is, blocking legal and benign inputs that should reach the application). Ultimately, it makes sense to understand RASP as a validation tool for inputs and data requests made to applications inside their runtime environments.

RASP Is an All-Purpose Technology

Because it's a plug-in that works with a range of runtime environments, RASP can handle both web-based and traditional (standalone executable-based) applications. Once present, RASP brings protection and detection capabilities to the servers where targeted applications run. In addition, because RASP sees the overall application state and context, it does not work at the packet level as application firewalls do. RASP generally has a nuanced and informed view of application and input states across current ongoing interactions. A stateful view of application inputs gives RASP more scope and flexibility for protection. It can exhibit a variety of behaviors as it detects unwanted actions or malicious inputs that match security rules or policies in its knowledge base. Such behaviors, in increasing order of severity, can include:

- Deny the offending input, with a warning message to the sending user
- Issue alerts for named recipients (usually administrators or security team members) when offending inputs occur
- Terminate the user session upon offending input
- Terminate the application upon offending input (without otherwise impacting the host server or other services and applications)

RASP implementations generally plug into existing server frameworks and runtime modules. Thus, they integrate with a program's code, associated libraries, and API calls. Such integration is what gives RASP the ability to handle inputs in real time as an application executes. […]
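The interception idea described above can be sketched in a few lines. This is a toy illustration, not a real RASP product: the decorator name, the rule patterns, and the handler are all hypothetical, and commercial RASP hooks the runtime itself rather than wrapping individual functions.

```python
import re

# Hypothetical rule set: patterns suggestive of injection attempts.
BLOCKED_PATTERNS = [
    re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # SQL tautology
    re.compile(r"<script\b", re.IGNORECASE),                          # script injection
]

def rasp_guard(handler):
    """Intercept string inputs; deny any that match a blocked pattern."""
    def wrapper(*args, **kwargs):
        for value in list(args) + list(kwargs.values()):
            if isinstance(value, str) and any(p.search(value) for p in BLOCKED_PATTERNS):
                # A real product would also log and alert per policy here.
                raise PermissionError("RASP: offending input denied")
        return handler(*args, **kwargs)
    return wrapper

@rasp_guard
def lookup_user(username):
    # Stand-in for real application logic.
    return f"profile for {username}"
```

A benign call such as `lookup_user("alice")` proceeds normally, while an input like `"' OR 1=1 --"` is denied before the application logic ever sees it, mirroring the valid-interception goal described above.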

Read More

Managing API Security for AI Programming

Published May 06, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel

The best tool for securing use of application programming interfaces (APIs), including those employed for AI programming, may be AI itself. Artificial intelligence is extraordinarily adept at modeling how APIs get used. This means that AI models can continuously examine and analyze API activity, providing an opportunity to address oversights and issues that policy-based API coverage cannot handle. The timing of this technology is fortuitous, because Gartner predicts that API abuses will become the most frequent attack vector resulting in data breaches within enterprise web-based applications.

These days, says InfoWorld, enterprises make use of authentication, authorization, and throttling capabilities to manage APIs. Such tools are vital in controlling who accesses APIs within an enterprise IT environment. But these approaches do not address attacks that survive such filtering and scrutiny because of clever attacks embedded within apparently legitimate API calls and uses. Nowhere is this more apt than for AI programming itself, which represents a substantial and increasing share of programming activity within enterprises nowadays.

Within an organization, it's typical to use API gateways as the primary way to call and use APIs. Such gateways can enforce API policies by checking inbound requests against rules and policies that relate to security, throttling, rate limits, value checks, and more. In this kind of environment, both static and dynamic security checks can be helpful and improve security within the applications they serve.
Static Security Checks and Policies

Static policy checks work well for quick, simple analyses because they do not change with request volume or previous request data. Static security scans work well to protect against SQL injection attacks, coercive parsing attacks, schema poisoning attacks, entity expansion attacks, and other attacks that depend on clever manipulation of API inputs. Static policy checks work by scanning incoming packet headers and payloads, and can match against already-known access patterns associated with attacks. This permits, for example, JSON payloads to be validated against predefined JSON schemas, and can screen against injection attempts of various kinds. An API gateway can also enforce element count, size, and text pattern limits or filters to forestall attempted buffer overflows or illegal command injections.

Dynamic Security Checks and Policies

Dynamic security checks, as the name implies, work against inputs and behaviors that can change. Typically, this means that inputs must be validated against some notion of state or status, as defined by previous inputs and the data associated with them. Most often, dynamic checks reflect the volume or frequency of API traffic and requests at the gateway. For example, throttling techniques depend on tracking previous activity volume to limit access when the number of prior API requests exceeds some predetermined (but adjustable) threshold. Rate limiting works in similar fashion, curbing the concurrent access allowed for some particular service or resource.

While techniques based on authorization, authentication, throttling, and rate limiting can be helpful, they do not address all the ways in which APIs might be attacked. Because API gateways typically serve numerous web services, their attendant APIs may […]
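The static and dynamic checks just described can be sketched together in one gateway-style function. This is an illustrative toy, not any gateway product's actual API; the function name, size limit, and rate threshold are all invented for the example.

```python
import time
from collections import deque

MAX_PAYLOAD_BYTES = 4096   # static limit: illustrative value
RATE_LIMIT = 5             # dynamic: max requests per client per window
WINDOW_SECONDS = 60.0

# Per-client timestamps of recent requests (sliding window).
_request_times = {}

def check_request(client_id, payload, now=None):
    """Return True if the request passes both static and dynamic checks."""
    now = time.monotonic() if now is None else now

    # Static check: enforce a payload size limit, independent of history,
    # to forestall buffer-overflow-style abuse.
    if len(payload) > MAX_PAYLOAD_BYTES:
        return False

    # Dynamic check: throttle clients whose request count within the
    # sliding window exceeds the threshold.
    times = _request_times.setdefault(client_id, deque())
    while times and now - times[0] > WINDOW_SECONDS:
        times.popleft()
    if len(times) >= RATE_LIMIT:
        return False
    times.append(now)
    return True
```

Note the asymmetry the article draws: the static size check needs no state at all, while the throttling check is meaningless without the history of prior requests.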

Read More

Securing Serverless Applications

Published Apr 29, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel

Although the term says "serverless," serverless applications don't really run without any servers involved. Rather, serverless applications run inside cloud-based infrastructures so that developers and operators no longer need to stand up and run their own servers, virtual or physical. That is, the application still runs on a server, but the responsibility for server management falls on the cloud service or cloud platform provider instead. That means organizations need not themselves provision, scale, manage, and maintain the servers on which their applications run; they use a serverless architecture to build, test, deploy, and run their applications and services for clients, customers, end users, and so forth. AWS Lambda, for example, is a serverless service that includes automatic scaling, with high availability baked into the runtime environment, charged on a pay-for-value basis.

As is typical for cloud-based runtime environments, serverless applications adhere to what's often called a "shared security model." Under this model, the cloud provider is responsible for the security of the cloud, while those who host their applications are responsible for the security of their applications in the cloud. When organizations adopt serverless technologies, the responsibility the cloud provider assumes climbs up the stack to include operating system and networking security for the servers on which the organization's serverless applications run.
In theory, this means the job of security is easier for serverless applications than for cloud-based applications where the operating organization also stands up the underlying virtual infrastructure. In fact, Amazon recommends (and most other cloud service and platform providers concur) that companies adhere strictly to the Principle of Least Privilege (PLP) and also follow best practices for securing their serverless applications. Amazon recommends its own identity and access management (IAM) platform to secure and manage access to its services and resources, but similar capabilities are available from all of the major cloud platform providers, including Azure, Google, Oracle, IBM, Alibaba, and others.

Proper use of identity and access management technology is indeed key to securing serverless applications. This includes access controls through accounts and groups or job roles, and specific constraints on how users may interact with serverless applications. Such constraints might pertain to days of the week, times of day, or originating IP addresses, and may also require use of SSL/TLS or other secure protocols, or even multi-factor authentication (2FA or better), before allowing access to proceed. In addition, most cloud platforms' identity and access management tools support access auditing and reporting, so the organization's security team and administrators can confirm that prevailing policies provide only authorized public and private accounts with appropriate access to applications and their resources. In fact, organizations should use this reporting to tweak and adjust their security policies to enable access only to services in use, following PLP. Multi-factor authentication (MFA) makes the most sense for privileged accounts and access (administrators, developers, architects, and security staff), so that privileged access is available only to those who provide a hardware MFA device or who use an authentication app […]
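The kinds of IAM constraints mentioned above (time of day, originating IP address, MFA) can be illustrated with a deny-by-default check. The policy keys and values below are invented for the example and do not follow any cloud provider's actual IAM syntax.

```python
import ipaddress
from datetime import datetime, timezone

# Hypothetical policy: names and values are illustrative only.
POLICY = {
    "allowed_hours_utc": range(8, 18),                           # business hours
    "allowed_networks": [ipaddress.ip_network("203.0.113.0/24")],  # office range
    "require_mfa": True,
}

def access_allowed(source_ip, mfa_verified, when=None):
    """Evaluate a request against the policy; deny by default (PLP)."""
    when = when or datetime.now(timezone.utc)
    # Time-of-day constraint.
    if when.hour not in POLICY["allowed_hours_utc"]:
        return False
    # Originating-IP constraint.
    ip = ipaddress.ip_address(source_ip)
    if not any(ip in net for net in POLICY["allowed_networks"]):
        return False
    # MFA constraint.
    if POLICY["require_mfa"] and not mfa_verified:
        return False
    return True
```

Every branch falls through to denial, which is the deny-by-default posture PLP calls for; access is granted only when all constraints are satisfied.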

Read More

Comprehensive Guide to Cyber Insurance

Published April 22, 2021 WRITTEN BY THE KIUWAN TEAM. Experienced developers, cyber-security experts, ALM consultants, DevOps gurus, and some other dangerous species.

Social media, advanced technology, and the growing popularity of business transactions over the web continue to determine how organizations operate and communicate with their prospective customers. However, they're also gateways to cyberattacks and data loss. Whether launched by criminals, insiders, or run-of-the-mill hackers, the likelihood of a cyberattack exists, and both small and established organizations face the risk of moderate or severe harm. As a component of their risk management strategy, companies now have to routinely decide which risks to accept, control, avoid, or transfer. Risk transfer is where cyber insurance policies come into play.

What Is Cyber Insurance?

It's also called cyber liability insurance coverage (CLIC) or cyber risk insurance. In essence, the policy is designed to provide risk exposure mitigation to companies. It does this by offsetting expenses the business incurs to recover after a security breach or any other cyber-related threat.

The concept entered the market in the early 2000s and has its roots in E&O (errors and omissions) insurance. Very few providers existed then, and the main threats covered included network security, viruses, and unauthorized access. A lot has changed since its inception. For instance, earlier iterations mainly focused on third-party indemnity coverage. But as the years went by, providers began including first-party coverage for credit monitoring, notification, crisis management, public relations, and identity restoration. Early on, the first-party coverages were sub-limited, contrary to the full limits available in the market right now. Soon after, additional coverages like PCI penalties and fines, regulatory penalties and fines, first-party business interruption, and cyber extortion followed.
Recent years have seen the inclusion of social engineering, system failure coverage, and property damage to devices and hardware. Different advancements in the coverage's scope appear every year.

Types of Cyber Insurance Coverages

Here are the different types of cybersecurity insurance coverages:

Cyber Security Insurance

It's also referred to as Crisis Management Expense or Privacy Notification coverage. The insurance product covers you and your business against first-party damage, but not against damage to third parties. It specifically takes care of the immediate response costs after a data breach. Some of these costs include:

- Contracting forensic experts to ascertain the breach's origin and give suggestions on practical approaches to site security and future breach prevention
- Paying a public relations service to help address the crisis
- Informing everyone whose personally identifiable information is compromised
- Monitoring the victims' credit for 12 months
- Compensating the cost of restoring stolen identities

Cyber Liability

It's also called Information Security and Privacy Insurance and covers liability for breach damages. Direct response costs aren't covered. It's ideal for e-commerce agencies and those that keep client data in their internal electronic networks. Common breaches involve the following types of personal or financial data:

- Credit card numbers
- Social security numbers
- Bank account details
- Health information
- Intellectual property or trade secrets

Technology Errors and Omissions

Also called E&O or Professional Liability, this liability coverage protects companies that offer technology products and services. It protects you from bearing the entire cost of defending yourself when a civil lawsuit awards damages after a customer's negligence claim. Apart from companies selling and servicing computer products, the insurance also includes advertising […]

Read More

Canary in a Coal Mine: Detecting Cyberattacks Early

Published April 15, 2021 WRITTEN BY MICHAEL SOLOMON. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

Many catastrophic events are obvious, with their effects immediately visible, but not all. Fires, floods, tornadoes, and earthquakes are all examples of events that can cause a substantial impact to business operations and require no effort to detect. Everyone can see what caused the damage. Cyberattacks can be very different. While some cyberattacks, such as Denial of Service (DoS) attacks, cause interruptions that are immediate and visible, many other attacks are not so obvious. For example, an attack that extracts sensitive customer information likely will not raise alarms and can occur without anyone realizing what happened. Since the first step in responding to a security incident is identifying that an incident has occurred, identification becomes important to survival.

A recent IBM breach report states that companies that are victims of a cyberattack take an average of 207 days to identify the breach, and an additional 73 days, on average, to contain it. Think about that: on average, victims of cyberattacks only realize they have been attacked after more than half a year has passed. Since many cybercriminals plunder their victims repeatedly after the initial breach, losses can accumulate the longer an attacker goes undetected. A key indicator of how much damage a cyberattack may cause is how soon that attack is detected and stopped. Early breach identification is the single most important action to reduce the blast radius and increase the likelihood of surviving the attack. Let's look at some ways companies can place controls that provide an early alert of cyberattack activity.
Like the canaries coal miners used to carry with them, an early warning of danger can help avert disaster.

Manage cybersecurity risk

Encountering business interruptions is not a new phenomenon. There are many ways an organization can run over operational "speed bumps" that reduce or completely block its ability to carry out its core business functions. These speed bumps are often referred to as risk. Risk is the probability that something will occur that has either a positive or negative effect. Most risk is perceived as something that may cause loss, but risk can have a positive result, such as finishing a project early. We will only cover negative risk in this article.

A proven way to minimize the negative effects of realized risk is to develop plans to handle the risks that can cause the most damage. Of course, that is easier said than done. Ignoring risk is dangerous, but managing it well can be the difference between surviving and succumbing to a realized risk such as a cyberattack. The quality of your plans is directly related to your probability of success.

Business Impact Analysis

The first plan you will need to combat cyberattacks is a Business Impact Analysis (BIA). A BIA summarizes your business processes and identifies the functions that must be operational for your organization to stay in business. These core functions are called Critical Business Functions (CBFs). Once you have identified your CBFs, you know what you must protect. If any CBF gets interrupted, your business process […]

Read More

Securing Cloud Access in Applications

Published March 31, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel

As applications become increasingly cloud-based, or even cloud-native, more and more such code is sending data to and from cloud-based stores, both public and private. This makes the methods and controls that such applications use to access the cloud of particular interest. It also keeps the onus on application owners to protect and preserve application data, particularly when it involves information subject to compliance and regulatory requirements. That brings a host of other concerns into play, ranging from preserving privacy and confidentiality to the "right to be forgotten" (a GDPR requirement that obliges organizations to dispose of data about any registered individual within 30 days of a request to do so, or face fines and penalties).

Pass the Data, But Not the Buck

Indeed, organizations must realize and own up to their responsibility for data, even when it leaves their hands and goes into the cloud. At best, the cloud service provider will assume a "shared responsibility" for an organization's data once it hits their servers or data stores. But always, the organization that acquires (and presumably controls and protects) such data remains legally responsible for its privacy, confidentiality, and disclosures of breach, theft, or unwanted access. Thus, organizations that use cloud platforms should thoroughly understand the provider's security capabilities, any data protection (such as encryption, access control and audit, and so forth) the provider offers, and what responsibility and liability it assumes for data and applications that run within its systems.
Best Security Practices for Cloud Access

For cloud-consuming organizations, that's just the beginning. Best security practices also insist that organizations implement the following principles wherever access to cloud applications, data, configurations, and resource consumption is concerned:

Apply the Principle of Least Privilege (PLP): All access should be set to "deny" by default, with only as much access allowed for authorized parties as they need to use an application (ordinary users) or administer the organization's cloud environments and settings. All admin-level access should be logged and routinely audited, especially use of privilege, account management, and configuration and set-up of applications and data stores.

Use strong authentication, 2FA or better: Ideally, all access to cloud-based applications and data should require clearing demanding hurdles before access requests are granted. At a minimum, ordinary users should be required to use two-factor authentication (2FA: cellphone or email confirmation of one-time codes). Higher-level access should probably use multi-factor authentication that includes something beyond 2FA, such as a certificate, smart token device, or biometric data (fingerprint, facial scan, and so on), or be tied to a specific admin workstation's MAC address.

Encryption for data in motion and at rest: By default, organizations should turn on and use the strongest encryption they can employ without unduly affecting data access and/or application performance. Data should also be encrypted wherever it's stored, both at endpoints when used on the client side, and in data stores when in use by an application or truly at idle rest (active or multi-tiered storage repositories). […]
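On the 2FA point above: one widely used form of one-time code is the time-based one-time password (TOTP) defined in RFC 6238, which underlies most authenticator apps. A minimal sketch using only the Python standard library follows; the function name and parameters are our own, not any library's API.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Compute an RFC 6238 time-based one-time code from a base32 secret."""
    key = base64.b32decode(secret_b32)
    # Counter = number of completed time steps since the Unix epoch.
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: low nibble of last byte picks the offset.
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return f"{code:0{digits}d}"
```

The sketch reproduces the RFC 6238 test vector: with the ASCII secret "12345678901234567890" (base32 GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ), time 59, and 8 digits, it yields 94287082.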

Read More

Getting Ahead of Payment Card Security Threats

Published April 08, 2021 WRITTEN BY MICHAEL SOLOMON. Michael G. Solomon, PhD, CISSP, PMP, CISM, PenTest+, is a security, privacy, blockchain, and data science author, consultant, educator and speaker who specializes in leading organizations toward achieving and maintaining compliant and secure IT environments.

Payment card attacks are nothing new. Cybercriminals have been targeting payment cards for more than a decade. However, there is a disturbing trend of cybercriminals discovering and leveraging novel ways to steal payment card credentials during online transactions. Online merchants have long espoused techniques that make online commerce safe, but that assurance is under a new level of attack. Recent advances in payment card attack sophistication up the game for cybersecurity professionals. Protecting online commerce is always challenging, but it can be rewarding and effective. Let's look at a few ways to stay at least one step ahead of emerging payment card threats.

Understanding payment card threats

Using someone else's payment card to steal funds is an attack that has existed as long as payment cards themselves. In the beginning, merchants would use a mechanical device to press an impression of the raised payment card numbers into a set of carbon-copied transaction records. The customer would sign the record and take one copy. A second copy would stay with the merchant, and a third copy would go to a payment processor to settle the payment. The early process was simple, and when the device that created payment card impressions failed, vigorously rubbing a pen or pencil body over the card would transfer the image to the transaction record. In those days, if you could grab a payment card number and forge the owner's signature, you could create fraudulent transactions.
When online transactions started to become more prevalent, signatures became less important; all cybercriminals needed were elements of a payment card holder's basic information, such as the card number, name, and billing address. Intercepting credit card numbers wasn't very difficult, since encryption wasn't the norm prior to the early 2000s. But it didn't take long for the payment card industry to recognize the growing threat to transactions. Several of the biggest payment card industry vendors, including Visa, MasterCard, American Express, JCB International, and Discover, joined forces to develop the Payment Card Industry Data Security Standard (PCI DSS). One of its many requirements is that all transmissions involving payment card data (and any subsequent storage) must be encrypted.

PCI DSS increased security and upped the ante for payment card attacks, so cybercriminals upped their game as well. Now we see a wide range of attacks that focus on intercepting, or skimming, payment card numbers and related data prior to any encryption. The general idea behind today's attacks is to find creative ways to push the attack closer to the point of payment card number acquisition. In the physical world, this led to portable and stealthy physical card skimmers. Card skimmers work by replacing a valid card reader with a device that reads the credit card data and then sends it to an attacker's preferred repository. Sophisticated skimmers pass the data through to the intended destination to remain undetected for as long as possible. As small battery-powered skimmers became popular, unscrupulous servers at some restaurants began skimming cards with pocket skimmers before processing payment cards properly. (Of […]

Read More

Published March 31, 2021 WRITTEN BY ED TITTEL. Ed Tittel is a long-time IT industry writer and consultant who specializes in matters of networking, security, and Web technologies. For a copy of his resume, a list of publications, his personal blog, and more, please visit www.edtittel.com or follow @EdTittel.

As applications become increasingly cloud-based, or even cloud-native, more and more such code is sending data to and from cloud-based stores, both public and private. This makes the methods and controls that such applications use to access the cloud of particular interest. It also keeps the onus on application owners to protect and preserve application data, particularly when it involves information subject to compliance and regulatory requirements. That brings a host of other concerns into play, ranging from preserving privacy and confidentiality to the "right to be forgotten" (a GDPR requirement that obliges organizations to dispose of data about any registered individual within 30 days of a request, or face fines and penalties).

Pass the Data, But Not the Buck

Indeed, organizations must realize and own up to their responsibility for data, even when it leaves their hands and goes into the cloud. At best, the cloud service provider will assume a "shared responsibility" for an organization's data once it hits the provider's servers or data stores. But the organization that acquires (and presumably controls and protects) such data always remains legally responsible for its privacy and confidentiality, and for disclosure of any breach, theft, or unwanted access. Thus, organizations that use cloud platforms should thoroughly understand the provider's security capabilities, any data protection the provider offers (such as encryption, access control and auditing), and what responsibility and liability the provider assumes for data and applications that run within its systems.
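The 30-day disposal window described above implies that applications must track erasure deadlines for each request. A minimal sketch, taking the article's 30-day figure as given and using a hypothetical `ErasureRequest` record:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Disposal window per the article's reading of the GDPR requirement.
ERASURE_WINDOW = timedelta(days=30)


@dataclass
class ErasureRequest:
    """Hypothetical record of a data subject's right-to-be-forgotten request."""
    subject_id: str
    received: date

    @property
    def deadline(self) -> date:
        # Last permissible day to complete disposal of the subject's data.
        return self.received + ERASURE_WINDOW

    def is_overdue(self, today: date) -> bool:
        return today > self.deadline


req = ErasureRequest(subject_id="user-42", received=date(2021, 3, 1))
print(req.deadline)  # 2021-03-31
```

A real pipeline would also have to propagate the erasure to the cloud provider's stores and backups, which is precisely where the shared-responsibility question discussed above becomes concrete.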
Best Security Practices for Cloud Access

For cloud-consuming organizations, that's just the beginning. Best security practices also insist that organizations implement the following principles wherever access to cloud applications, data, configurations, and resource consumption is concerned:

Apply the Principle of Least Privilege (PLP): All access should be set to "deny" by default, with authorized parties granted only as much access as they need to use an application (ordinary users) or to administer the organization's cloud environments and settings. All admin-level access should be logged and routinely audited, especially use of privilege, account management, and the configuration and setup of applications and data stores.

Use strong authentication, 2FA or better: Ideally, all access to cloud-based applications and data should require clearing demanding hurdles before access requests are granted. At a minimum, ordinary users should be required to use two-factor authentication (2FA: cellphone or email confirmation of a one-time code). Higher-level access should probably use multi-factor authentication that goes beyond 2FA, such as a certificate, a smart token device, biometric data (fingerprint, facial scan, and so on), or a tie to a specific admin workstation's MAC address.

Encrypt data in motion and at rest: By default, organizations should turn on and use the strongest encryption they can employ without unduly affecting data access or application performance. Data should also be encrypted wherever it's stored: at endpoints when used on the client side, and in data stores when in use by an application or truly at idle rest (active or multi-tiered storage repositories). […]
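The deny-by-default rule in the first principle can be sketched as an access check that refuses anything not explicitly granted, with admin-level access logged for the routine audits the text calls for. The role names, resources, and grants below are hypothetical:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("cloud-access")

# Explicit grants only; any (resource, action) pair absent from a role's
# set is implicitly denied, as is any unknown role.
GRANTS = {
    "ordinary-user": {("orders-app", "read"), ("orders-app", "write")},
    "cloud-admin": {("orders-app", "configure"), ("data-store", "configure")},
}


def is_allowed(role: str, resource: str, action: str) -> bool:
    allowed = (resource, action) in GRANTS.get(role, set())
    if role == "cloud-admin":
        # Admin-level access is always logged so it can be routinely audited.
        log.info("admin access: %s on %s -> %s", action, resource, allowed)
    return allowed


print(is_allowed("ordinary-user", "orders-app", "read"))       # True
print(is_allowed("ordinary-user", "data-store", "configure"))  # False
```

The key design choice is that the lookup table records only permissions, never prohibitions, so a missing entry (or a typo in a role name) fails closed rather than open.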

Read More