Security Ranch

February 15, 2021

Navigating the Digital Landscape: Understanding and Defending against DDoS attacks

Filed under: Uncategorized — Tags: — Ken @ 8:14 pm

Navigating the Digital Landscape: Understanding and Defending Against DDoS Attacks

In the ever-evolving realm of cyber threats, the past year has witnessed a surge in attacks on major U.S. corporations. From Target to Home Depot and JP Morgan, these incidents underscore the need for robust cybersecurity measures. The recent Sony cyber attack, starting as a DDoS assault and escalating into a ransomware nightmare, exemplifies the escalating threats faced by businesses.

The Surge of DDoS Attacks and Their Economic Impact

Over the past year, Distributed Denial of Service (DDoS) attacks have become a prevalent and cost-effective weapon in the digital arsenal, constituting 11.7% of reported attacks in November 2014. As online transactions proliferate, businesses face an increasing risk of disruption. Understanding the anatomy of DDoS attacks is crucial in fortifying digital defenses.

Unmasking DDoS: The Digital Assault Explained

At its core, a DDoS attack aims to deny access to networks or computers, rendering them unusable for legitimate users. Orchestrated by a botnet—a network of compromised computers—these attacks employ bots infected by malware, such as worms or viruses, acting as the foot soldiers in the digital theater.

The World of Malware: Worms, Viruses, and Trojan Horses in Focus

Worms and viruses, akin to digital parasites, infiltrate computers using various tactics. Viruses infect files and replicate, spreading through shared drives, USB devices, or email attachments, while worms self-propagate across networks without needing a host file or user action. Trojan horses masquerade as legitimate programs, waiting to strike when executed and infecting unwitting systems.

Defending Against the Digital Onslaught: Tips for Users

For individuals, defending against becoming an unwitting bot involves using robust antivirus software regularly updated to fend off evolving threats. Vigilance in email interactions and cautiousness with attachments are crucial shields against infiltration. Recognizing the signs of a compromised computer—unexplained high CPU usage or unauthorized email activity—is paramount for personal cybersecurity.

Breaking Down DDoS: Crash and Flood Tactics Demystified

DDoS attacks fall into two main categories: Crash/Logic attacks and Flood attacks. Crash attacks exploit vulnerabilities in operating systems or configurations, attempting to bring the system down. Flood attacks inundate servers with meticulously designed requests, overwhelming their resources.

Layers of Assault: From ICMP to HTTPS in the DDoS Landscape

ICMP attacks at Layer 3 overload networks with erroneous messages, exploiting vulnerabilities to cripple systems. Smurf attacks amplify their impact by causing every device on a misconfigured network to repeat the attack. Reflective attacks, like a cyber echo, multiply the number of packets sent to the target, further inundating the victim.

HTTP and HTTPS GET Flood attacks, operating at Layer 7 (Application), flood servers with file or picture requests, consuming resources. SYN Flood attacks, at Layer 4 (Transport), exploit half-open connections, bogging down systems by never completing the TCP three-way handshake.
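
To make the flood idea concrete, here is a minimal, hypothetical Python sketch of the kind of per-source rate check a mitigation device might perform. The window size, threshold, and IP address are invented for illustration; real DDoS mitigation is far more involved.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds: more than 100 requests from one source in 10 seconds
WINDOW_SECONDS = 10
MAX_REQUESTS = 100

recent_requests = defaultdict(deque)  # source IP -> timestamps of recent requests

def record_request(src_ip, now=None):
    """Record one request and return True if the source looks like flood traffic."""
    now = time.time() if now is None else now
    window = recent_requests[src_ip]
    window.append(now)
    # Discard timestamps that have fallen out of the sliding window
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS

# A single host sending 500 requests in one second trips the check
for i in range(500):
    flooding = record_request("203.0.113.10", now=1000.0 + i / 500)
print("flood suspected:", flooding)  # True
```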

Mitigating the Storm: Defense Strategies and the Economic Impacts

Mitigating DDoS attacks demands advanced strategies, often beyond the reach of the average user. For individuals, installing reputable antivirus software and keeping it updated, along with firewall activation on routers, form the first line of defense. Large corporations, however, must deploy custom-built firewalls, capable of thwarting attacks across multiple layers of the OSI and TCP/IP models.

The economic fallout from DDoS attacks is staggering, with network downtime costing an average of $22,000 per minute, soaring up to $100,000. Large e-commerce sites can face daily losses exceeding $30 million. The aftermath extends beyond financial loss to reputational damage, potential litigation, and regulatory fines.

Sony’s Catastrophe: Lessons from an Advanced Persistent Threat

The recent assault on Sony by the enigmatic Guardians of Peace exemplifies an advanced persistent threat. Hacking into Sony’s servers, stealing sensitive data, and holding it hostage for ransom showcased a level of sophistication and malicious intent that transcends typical DDoS attacks. The fallout, from leaked emails to reputational damage, underscores the severity of such orchestrated campaigns.

The Ongoing Battle: Building Resilience for the Future

As society becomes increasingly reliant on the internet, the frequency and sophistication of DDoS attacks are set to rise. Building resilience against these attacks becomes imperative, ensuring that businesses can weather the storm and continue operations even in the face of digital onslaughts.

In the evolving landscape of cyberspace, one thing is certain: the battle against DDoS attacks is not a matter of if, but when. Fortifying our defenses and implementing robust business continuity and disaster recovery plans will be essential in navigating the digital frontier, where the only constant is change.

What is Cyber Threat Intelligence

Filed under: Uncategorized — Tags: — Ken @ 8:13 pm

What is Cyber Threat Intelligence

When one thinks of intelligence, they usually think about the military, like the Marine Corps, or intelligence agencies like the CIA. If that sounds militant, you are not too far off. Cyber intelligence, or threat intelligence, uses some of the same methods and procedures to defend networks that the intelligence agencies use to defend our country. The driver for the recent popularity of cyber threat intelligence is the increase in advanced persistent threats (APTs). An APT can loosely be defined as a category of attack in which a group or person specifically targets a business, and these attacks can combine different methods to gain access to the business' networks. "Script kiddie" attacks, by contrast, are usually simple, using a single method to do one or two things on a network. Vandalizing a website or DoS'ing a network can fall in this category.

Today, threats from hackers on the internet are growing in complexity, scale, and number. The defenses we used to protect our computers and networks in the past usually started by countering an already existing threat. The standard model for defending against cyber attacks is the monitor-and-respond strategy. This usually entails collecting as much information as possible from as many sources as possible to create the best configurations possible to beat the threat. The problem with this strategy is that it is reactive. By the time the IT staff discovers that its configurations were ineffective, the attack has already happened. Once the attack happens, an investigation is conducted to come up with a new configuration or ACL that will hopefully stop that type of attack from occurring again. That reactive method of developing defenses is inadequate for networks today. Developers and engineers simply cannot keep up with the evolving threat from hackers. Even standard risk analysis can fall short because it can only assess known vulnerabilities. How can you defend your networks against an attack that you have not seen yet? The answer is cyber threat intelligence.

So, what is cyber threat intelligence? To answer that we first have to define intelligence. Intelligence can be defined as the product resulting from the collection, processing, integration, evaluation, analysis, and interpretation of available information concerning foreign nations, hostile or potentially hostile forces or elements, or areas of actual or potential operations (Joint Pub 2-0). This definition works well, and with a little imagination one can see what cyber intelligence would be: foreign nations could be nation-state sponsors of cyber attacks, and hostile forces or elements could be hackers.

In 2002, Donald Rumsfeld gave a Department of Defense (DoD) briefing introducing the concept of "knowns." There are essentially three types of "knowns" that you could have about something: known knowns, known unknowns, and unknown unknowns (Rumsfeld, 2002). Known knowns are things that we know that we know; an example is what a cyber attack is and how to defend against it. Known unknowns include the assume-breach concept: we know that we are eventually going to get hacked, we just don't know when or how. Lastly, there are unknown unknowns, where we do not know what types of attacks are out there and we do not know when or how they will happen. A good example is zero-day attacks: we do not know what the attack is or how and when it is going to happen. Think back to most of the significant data breaches of the past. Most of those attacks happened over the course of months, and the victims never knew they had even been hacked. Cyber threat intelligence acts to move as many unknown unknowns as possible into the known unknowns category. To do this, cyber threat intelligence fills the defense gap by analyzing and sharing information.

One way cyber threat intelligence attempts to solve the unknown unknowns is through the exchange of information. Thinking back to traditional intelligence agencies, spies usually sneak around to find information to give back to their country. That country uses the information in a variety of ways, but mainly to clear the "fog of war," the unknown unknowns, and make better decisions. The analogy works the same way in the cyber world. The question to ask yourself in the realm of cyber security is: who are my adversaries, and what information do they likely want? By asking this question, you can start to narrow down and focus your efforts, because most businesses do not have an infinite amount of money and time to secure their networks.

When sharing information with other organizations, it is important to establish and maintain a consistent format. By doing this, an organization can more easily find what it is looking for. Different threat information types should be formatted in a way that makes them easy for a user to act on. There are five main data types: indicators; tactics, techniques, and procedures (TTPs); security alerts; threat intelligence reports; and tool configurations (Johnson, 2016).

There are two main reasons why companies may choose not to share information. The first is that they do not believe they have any information that would be valuable to other businesses. The second is that some firms do not want to assist their potential competition (Chismon, 2015).

Threat indicators, or simply indicators, are technical data that suggest an attack may happen or is already underway (Johnson, 2016). These indications can be anything from known malicious IP addresses to suspicious URLs. By sharing this type of information on threat intelligence clearinghouses, a company can help other businesses by sharing what it knows.
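
To make this concrete, here is a minimal Python sketch of checking outbound connections against a shared list of indicators. The field names, addresses, and URL are invented for illustration and do not follow any formal sharing format such as STIX.

```python
# Shared indicators and a simple log check against them.
indicators = [
    {"type": "ipv4", "value": "198.51.100.23", "description": "known C2 server"},
    {"type": "url", "value": "http://malware.example.com/payload", "description": "malware download site"},
]

outbound_connections = [
    "10.0.0.5 -> 198.51.100.23:443",
    "10.0.0.7 -> 93.184.216.34:80",
]

for conn in outbound_connections:
    for ind in indicators:
        if ind["type"] == "ipv4" and ind["value"] in conn:
            print(f"ALERT: {conn} matches shared indicator ({ind['description']})")
```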

TTPs are the actions that a hacker usually takes on a network (Johnson, 2016). Tactics are the high-level behaviors hackers exhibit. Techniques are the general methods hackers use on a network to gain unauthorized access; an example is using Metasploit to drop malware onto a target. Procedures are the detailed, step-by-step actions used to conduct the attack.

Security alerts are advisories or notifications about specific vulnerabilities, exploits, or other security concerns given to the general public by organizations (Johnson, 2016). One of the first organizations to provide security alerts to the public, the CERT Coordination Center, was created after the Morris Worm wreaked havoc on the internet in the US; the United States Computer Emergency Readiness Team (US-CERT) later took on the same role for the federal government. Other important sources are the National Vulnerability Database (NVD) and Microsoft Security Bulletins on TechNet.

Threat intelligence reports are reports on TTPs, hackers, or case studies of attacks that can help inform a company on how to secure its networks (Johnson, 2016).

Lastly are tool configurations. These are reports that contain the software or equipment settings used to defend against attacks, or what the configurations were when an attack occurred (Johnson, 2016). Such a report could also be used to instruct someone in how to use AV software or how to remove malware once a computer is infected.

In the United States Marine Corps (USMC), information about field exercises and projects is shared all the time. The lessons learned are put into reports published on the USMC's Center for Lessons Learned website, showing anyone who is interested what went right or wrong during different events. The same applies to information sharing for cybersecurity. When companies exchange information with other businesses, there is a shared awareness among them of events like DDoS attacks and their after-effects from a business continuity point of view. That situational awareness can be very valuable for companies that have not yet had to deal with network outages. Information sharing can also improve the overall security posture if companies pay attention; just as a rising tide lifts all boats, sharing information can improve security for everyone.

A report by the SANS Institute indicated that companies using threat intelligence saw 28% better context, accuracy, and speed in monitoring and incident handling; 51% faster and more accurate detection and response; and a 48% reduction in incidents through early prevention due to Cyber Threat Intelligence (CTI) (Shackleford, 2015). Unsurprisingly, the top user of CTI is the U.S. government. At the federal level, cooperation between the military and the civilian government has cross-pollinated experience, and both groups have benefited.

One potential weakness of CTI is being overwhelmed with information and not knowing how to use and integrate it. To help, several formats and frameworks have been created to identify the information and put it in a readable form. According to the SANS report, the most popular is the Open Threat Exchange (OTX), with 51% of companies responding that they use that framework. OTX has almost 26,000 users in 46 groups, and its reports cover over 929,000 indicators, from bad IP addresses to malicious URLs. The OTX can be accessed through the AlienVault website.

Another popular framework is the Open Indicators of Compromise (OpenIOC) framework. OpenIOC was created by Mandiant and contains tools to edit and create indicators of compromise (IOCs), the artifacts left behind by an attack. Companies that use the framework can create an XML document that puts these indicators in a logical format, which can then be used to adjust the configurations of firewalls, IDS/IPS, and other investigative tools. The standard life cycle of creating IOCs begins with an initial lead or piece of evidence. This could come from a notification from law enforcement or from an anomaly detected by a network device. After the initial discovery, IT personnel create the IOC from their existing evidence and the environment of the network. Once the IOC is created, it is deployed to the network. Deploying the IOC can cause changes to the network's ACLs, blacklisted URLs or IP addresses, or other signatures that alter the IDS/IPS. After the IOC is deployed, additional information and indicators can be included if anything new is discovered during the investigation. When the new evidence is included in the IOC, it can be further analyzed to refine, enhance, or create additional IOCs.
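
As a rough illustration of what such an XML document looks like, here is a Python sketch that builds a simplified IOC-style record. The element names are schematic and do not follow the exact OpenIOC schema, and the domain and MD5 hash are invented.

```python
import xml.etree.ElementTree as ET

# Build a simplified IOC-style document (illustrative element names only).
ioc = ET.Element("ioc", {"id": "example-0001", "description": "Suspicious outbound beacon"})
indicator = ET.SubElement(ioc, "Indicator", {"operator": "OR"})

# Each item pairs a context (what to look at) with content (what to match)
for context, value in [("Network/DNS", "bad.example.com"),
                       ("FileItem/Md5sum", "d41d8cd98f00b204e9800998ecf8427e")]:
    item = ET.SubElement(indicator, "IndicatorItem")
    ET.SubElement(item, "Context").text = context
    ET.SubElement(item, "Content").text = value

print(ET.tostring(ioc, encoding="unicode"))
```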

There are two ways companies can begin to add CTI to their network security practices. The first is to build and grow an intelligence cell from scratch. The benefit of creating their own CTI cell is that companies can stay at the leading edge of the security threats; because it takes time to investigate and create intelligence after an attack, companies would likely adjust their networks before finalizing any IOCs to publish. One major drawback of creating a CTI cell is the cost, which can be significant depending on the size and experience of the intelligence unit. For most smaller businesses this would be unrealistic, despite the need. An alternative to starting their own CTI cell is to subscribe to a managed security service that provides reports and intelligence. This can be a more cost-effective way for small and medium companies to leverage the experience of a larger one. FireEye, Dell SecureWorks, and Symantec are three companies that provide managed CTI, offering feeds of information that are constantly updated. Prices for this service vary from $2,000 to $3,000 per month for a single feed up to $100,000 for a 12-month subscription covering 1 to 2,500 computers (Tittel, 2015). Companies considering either option should conduct a risk assessment and analyze the return on investment to make sure the price is worth it.

In a large enterprise, cyber threat intelligence will usually fall within a Network Operations Center (NOC) or Security Operations Center (SOC). These teams serve two different purposes, but they are sometimes combined depending on the size and budget of the organization. Larger organizations, like government entities, usually have separate teams because of the potential for a conflict of interest: you would not want the same system administrators who keep the network running to also be responsible for auditing the logs, for example. Threat intelligence normally falls within the SOC team's responsibility, which can also include risk analysis and IDS/IPS analysis. Because there are so many different threats in cyberspace and only so much money to go around, risk analysis is essential: it is the process of discovering all of the vulnerabilities on a network and prioritizing them from most severe to least severe. The budget should prioritize reducing the most severe risks so that the business gets the most "bang for the buck." If these risks are not prioritized correctly, the company could waste money reducing a risk that has no real impact on the security of its network. This is where threat intelligence can help prioritize the risk. By sharing information with others, each business can learn which controls other companies implemented and analyze how effective they were. If the controls were effective and there was a significant reduction in attacks, that information can be used to help secure the network. If the controls were ineffective, the company can still learn which controls were least efficient and find alternatives. Each time companies share information on attacks, controls, or investigations, everyone benefits from the shared knowledge. Used this way, threat intelligence makes risk analysis more effective and efficient.
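
As a toy example of the prioritization step, here is a minimal Python sketch that ranks risks by likelihood times impact. The risk names and scores are invented for illustration; real risk analysis uses far richer inputs.

```python
# Rank risks by likelihood x impact before allocating budget.
risks = [
    {"name": "Unpatched public web server", "likelihood": 0.8, "impact": 9},
    {"name": "Weak password policy", "likelihood": 0.6, "impact": 6},
    {"name": "Unlocked server closet", "likelihood": 0.2, "impact": 8},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

for r in sorted(risks, key=lambda r: r["score"], reverse=True):
    print(f"{r['score']:.1f}  {r['name']}")
```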

Business and the military both operate on three levels. The bottom level is the tactical level. For IT roles, this level is responsible for monitoring the network and managing users, upgrades, and patches. One of the problems at this level is the sheer number of tasks that need to be completed; it is often difficult to test and manage patches while simultaneously scanning logs and ensuring that normal users have access to their accounts. CTI can help at this level by helping prioritize the efforts of the IT staff so that their work has the most benefit to the network (Friedman, 2015).

The next level is the operational level, which includes the incident response and forensic teams. A problem at this level is that it can be time-consuming and difficult to investigate an attack and contain the damage of further breaches. CTI can again help prioritize the efforts of the investigating staff and provide case studies and indicators they can use to speed up their processes (Friedman, 2015).

The top level for businesses is the strategic level. This is the level at which the Chief Information Security Officer (CISO) and other C-suite executives work. One of their problems with IT security is that they often lack a technical understanding of the issues, and with that lack of understanding they have a difficult time prioritizing funding for investment in new or expanded technologies and tools. CTI can help these executives prioritize their spending on the most likely threats and give the company the most bang for its buck in stopping the most dangerous and likely attacks (Friedman, 2015).

The future looks bright for cyber threat intelligence. It should play an even larger role as artificial intelligence (AI) and machine learning expand into more industries. The usual method of adding more hardware to the network is having less and less impact on keeping networks secure. Cyber threat intelligence fills the defense gap by sharing information and analyzing previous attacks to help prevent more attacks from occurring.

References

Lord, Nate.  (October 2016).  What is Threat Intelligence?  Finding the Right Threat Intelligence Sources for Your Organization.  Retrieved from https://digitalguardian.com/blog/what-threat-intelligence-finding-right-threat-intelligence-sources-your-organization

Rumsfeld, Donald.  (February 2002).  DoD News Briefing – Secretary Rumsfeld and Gen. Myers.  Retrieved from U.S. Department of Defense Web site: http://archive.defense.gov/Transcripts/Transcript.aspx?TranscriptID=2636

Department of Defense. (2013).  Joint Publication 2-0 Joint Intelligence.  Washington D.C. USDOD.

Johnson, Chris.  Badger, Lee.  Waltermire, David.  Snyder, Julie.  Skorupka, Clem.  (October 2016).  Guide to Cyber Threat Information Sharing.  Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-150.pdf

Shackleford, Dave.  (February 2015).  Who’s Using Cyberthreat Intelligence and How?  Retrieved from https://www.sans.org/reading-room/whitepapers/analyst/who-039-s-cyberthreat-intelligence-how-35767

Chismon, David.  Ruks, Martyn.  (2015).  Threat Intelligence:  Collecting, Analysing, Evaluating.    Retrieved from https://www.ncsc.gov.uk/content/files/protected_files/guidance_files/MWR_Threat_Intelligence_whitepaper-2015.pdf

Friedman, Jon. Bouchard, Mark.  (2015).  Definitive Guide to Cyber Threat Intelligence.  Retrieved from https://cryptome.org/2015/09/cti-guide.pdf

Tittel, Ed.  (April 2015).  Comparing the top threat intelligence services.  Retrieved from http://searchsecurity.techtarget.com/feature/Comparing-the-top-threat-intelligence-services

Firewall: A small piece of the security puzzle.

Filed under: Uncategorized — Tags: — Ken @ 8:10 pm

Firewalls: A Small Piece of the Security Puzzle

The first firewalls were invented in the late 1980s as a reaction to the world's first malware. The infamous Morris Worm struck the internet on November 2, 1988, after Robert Morris released a program intended to gauge the size of the internet (Bortnik, 2013). The program contained a small error that made it replicate out of control, and it ultimately took down ten percent of the known internet at the time. Later, two computer scientists released the first paper describing a packet-filtering program that would become known as a firewall.

What are known as first-generation firewalls were simple programs that filtered packets against a set of rules, either passing, dropping, or rejecting each packet. Passing a packet allowed the traffic to enter the network, dropping a packet silently discarded it, and rejecting a packet sent an error message back to the sender. The rules allowed traffic based on protocols, ports, or the destination/source IP address. These were "stateless" firewalls: they inspected each packet individually without any regard to the stream of information belonging to a connection. So, if a connection was established between two networks and ten thousand packets were sent from one to the other, ten thousand packets would be inspected. This presented a problem: with the increasing size and complexity of the internet, firewalls would have to become more powerful to inspect the increased traffic. This led to the creation of "stateful" firewalls.
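
To illustrate the idea, here is a minimal, hypothetical Python sketch of stateless rule matching. The rules, addresses, and ports are invented; real firewalls of course work on raw packets rather than dictionaries.

```python
# Each packet is checked against an ordered rule list with no memory of prior packets.
RULES = [
    {"action": "pass",   "proto": "tcp", "dst_port": 443, "dst": "192.0.2.10"},
    {"action": "reject", "proto": "tcp", "dst_port": 23,  "dst": None},   # block telnet anywhere
    {"action": "drop",   "proto": None,  "dst_port": None, "dst": None},  # default deny
]

def evaluate(packet):
    for rule in RULES:
        if ((rule["proto"] is None or rule["proto"] == packet["proto"]) and
                (rule["dst_port"] is None or rule["dst_port"] == packet["dst_port"]) and
                (rule["dst"] is None or rule["dst"] == packet["dst"])):
            return rule["action"]
    return "drop"

print(evaluate({"proto": "tcp", "dst": "192.0.2.10", "dst_port": 443}))  # pass
print(evaluate({"proto": "tcp", "dst": "192.0.2.10", "dst_port": 23}))   # reject
```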

“Stateful” firewalls are known as second-generation firewalls. These firewalls operate at Layers 3 and 4 of the OSI model and retain information about the state of each connection, determining whether a packet belongs to an existing connection or a new one. This led to more complex rules that allowed firewalls to be more efficient and effective. It also allowed them to defend against several types of DoS attacks that were emerging at the time.
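
Continuing the sketch above, the stateful idea can be shown with a simple connection table: the rules are consulted only when a new connection is opened, and packets belonging to an established connection are passed without re-evaluation. Again, the addresses and the single rule are invented for illustration.

```python
state_table = set()  # established connections as (src, dst, dst_port) tuples

def allowed_by_rules(packet):
    # Stand-in for a stateless rule base: only HTTPS to the web server may open a new connection.
    return packet["dst"] == "192.0.2.10" and packet["dst_port"] == 443

def stateful_check(packet):
    conn = (packet["src"], packet["dst"], packet["dst_port"])
    if conn in state_table:
        return "pass"              # part of an already-established connection
    if allowed_by_rules(packet):   # new connection permitted by the rules
        state_table.add(conn)
        return "pass"
    return "drop"

pkt = {"src": "198.51.100.7", "dst": "192.0.2.10", "dst_port": 443}
print(stateful_check(pkt))  # pass, and the connection is now remembered
print(stateful_check(pkt))  # pass again without consulting the rules
```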

Third-generation, or application, firewalls have been around since the early 1990s. These firewalls are smarter than their predecessors because they know which applications use which protocols and ports. For example, if I were using File Transfer Protocol (FTP) on anything other than port 21, the firewall could drop the connection unless I had a rule in place that said otherwise. This is what makes these firewalls smarter and helps prevent even more attacks on the network.

Now, the Next Generation Firewall (NGFW) is the latest and greatest firewall technology. It is still a third-generation, application-level firewall, but it can inspect packets at a deeper level than before. This is where firewalls start to become specialized as Web Application Firewalls (WAFs) or Intrusion Prevention Systems (IPSs). In 2007, Palo Alto Networks produced a white paper stating that 80% of new attacks target weaknesses in applications (Bouchard, 2007).

The strength of a firewall lies in its ability to control traffic coming into and out of the network. This allows a business to permit or restrict exactly what it wants on the network. Users can be granted permissions and privileges based on their jobs or locations, which helps enforce the company's network and security policies and, in turn, makes managing the network easier and more secure. Another strength is that the firewall has a single purpose. This makes the program more user-friendly and efficient because it only has to do one thing, and do it well. If a program had to be both a firewall and a tool to create and manage user accounts, it would likely be less efficient at both. That is why there are so many different types of programs that each have a single purpose. However, despite these tools, companies are still getting hacked. In one FireEye study, the networks that are supposed to be the most secure, government and military, were shown to have a breach rate of 76% (Dunn, 2015). That is pretty bad. There are many reasons for this, but the bottom line for firewalls is that they are not hack-proof.

Firewalls do have several weaknesses that have evolved over time. First-generation firewalls were very straightforward and could only inspect packets based on port, protocol, or source/destination address. This worked well but could not stop malicious traffic that exploited the firewall's simplicity; for example, packets spoofing protocols or ports would not be filtered and would pass through the firewall. This is where second-generation firewalls came in. These firewalls filtered packets based on the connection or session and remembered information about the connection. This cut down on the previous weaknesses, but these firewalls became targets of DoS attacks that exploited the fact that they remember connection state: attackers would try to fill up the firewall's memory to overload it and make it fail.

Other weaknesses have little to do with the firewalls themselves. One thing firewalls cannot protect against is the insider threat. Users could give themselves greater access than they need and become a threat to the network, whether they are malicious users who want to damage the network or the business, or users who are simply ignorant of the threats on the internet. Another weakness is that a firewall cannot protect against anything that has already passed through it. If malware gets through into the network and causes damage, there is nothing the firewall can do to fix the problem; this most often takes the form of malware sent via email. A third weakness is that firewalls cannot protect networks that are poorly structured or configured improperly due to bad security policies (Laverty, n.d.). Firewalls can only filter traffic that passes through them, and only according to the rules that network administrators set. A firewall is useless if its rules are configured to pass everything through into the network.

To combat these weaknesses, other tools and techniques must be used. Intrusion Detection Systems are devices or software that analyze network traffic and look for known events or certain types of traffic. Once an event is recognized, a log message is recorded. Intrusion Prevention Systems work the same way, but instead of only logging a message they can also react by blocking the traffic or taking some other preprogrammed action. Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) are very similar in how they function; in fact, some IDSs can be changed into an IPS just by changing a drop-down menu from Log to Log/Drop (Pack, 2013). There are two broad types of IDS/IPS. The first is the network-based IDS, or NIDS. The second is the host-based IDS, or HIDS. NIDS are devices installed on the network, usually positioned just behind the firewall so that any traffic that passes can be analyzed for certain signatures. HIDS are software-based and run on individual computers or hosts, monitoring only the traffic coming and going from that machine.

HIDS and NIDS work by inspecting network traffic, and there are two main detection methods. The first is signature-based. Signature-based HIDS/NIDS look for known signatures of malicious threats such as worms, viruses, and Trojans; this malware leaves recognizable signatures in the packets it sends across the network. These signatures work the same way as those in anti-virus (AV) software, which scans for known signatures of malware.

The second detection method is anomaly-based detection. Anomaly-based detection is a little more complicated because it requires setting a baseline of network traffic on which to base its logic. For example, if there is normally traffic only during working hours from 8 am to 5 pm, then any traffic occurring outside those hours would either be logged (in the case of an IDS) or blocked (in the case of an IPS).
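
Here is a minimal Python sketch of that working-hours baseline. The hours, timestamps, and IDS/IPS behavior are simplified assumptions for illustration; real anomaly detection builds statistical baselines over many traffic features.

```python
from datetime import datetime

# Crude baseline: traffic is expected only between 08:00 and 17:00.
BASELINE_START, BASELINE_END = 8, 17

def check_event(timestamp, mode="ids"):
    hour = datetime.fromisoformat(timestamp).hour
    if BASELINE_START <= hour < BASELINE_END:
        return "normal"
    # Outside the baseline: an IDS logs the event, an IPS blocks it
    return "logged" if mode == "ids" else "blocked"

print(check_event("2021-02-15T14:30:00"))               # normal
print(check_event("2021-02-15T02:10:00"))               # logged
print(check_event("2021-02-15T02:10:00", mode="ips"))   # blocked
```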

Many businesses prefer IPSs for their ability to stop malicious traffic, but IPSs have drawbacks that can make an IDS the better choice. Because IPSs can block traffic, they can sometimes block innocent traffic or traffic with malformed packets. This is known as a "false positive" and can lead to frustration if the blocked traffic is essential. Depending on the business and its security policies, it may be preferable to have an IDS instead. A good example of a company that might want an IDS over an IPS is an online seller of goods: if internet traffic cannot reach its website because it gets blocked, the business loses money.

Another weakness that firewalls share with IDSs and IPSs is trouble with attacks from within and with social engineering. It is often said that the weakest link in the network is always the human. Many of the largest attacks on businesses have come from insiders who want to damage the system or the business, or from social engineering attacks in the form of phishing emails or innocent-sounding phone calls. In 2014, Sony was the target of a disgruntled employee who wreaked havoc on its networks and caused substantial damage, both monetary and reputational (Schneier, 2014). In the 2016 elections, the Democratic National Committee (DNC) was compromised when one of its campaign leaders replied to a phishing email that asked him to reset his password (Biddle, 2016). Extensive training must be given to users who have computer access. This can help prevent the easy attacks, but the most advanced attacks are almost impossible to recognize because of how carefully they are designed. The advantage is always on the attacker's side, because the attacker has the time and the knowledge to put the attack together. In one scenario, a penetration tester was trying to find a way to hack a high-ranking vice president of a company who did not have a large internet footprint. The pentester ended up searching the internet for any email addresses the vice president used and found that the VP had used his company email account to register for a stamp-collecting forum. The pentester then created a story and a website about his grandfather passing away and wanting to sell his stamps, emailed the VP the story, and included a link to the website. What the VP did not know was that hidden malware embedded in the website gave the pentester access to the VP's account (Hadnagy, 2010).

A third weakness of firewalls is their inability to compensate for a poorly designed network. This means that care and thought must go into designing the network topology. Often, when businesses are expanding, they fail to properly scale their networks to account for the larger number of devices and hosts. One way to design networks is to use the zero-trust model. Traditionally, traffic within a network was considered trusted because it came from within: if one host copied files from a server, the traffic never passed through a firewall or got filtered. Now, with the increasing threat of insiders stealing a company's intellectual property (IP) or of a hacker compromising a single computer, traffic within the network must be filtered as well. This is how the zero-trust model came to be. Forrester Research, working with NIST, helped design the zero-trust model to defend against threats that traditional network designs cannot prevent (Covington, 2015). This model uses network segmentation and assumes all traffic on the network is untrusted. Least privilege and a strict access control list are what make this model effective against the insider threat. It also means the network needs more firewall devices within it: instead of traffic passing from one department to another without going through a firewall, the departments should be segmented so that any traffic leaving a department has to pass through a firewall before it reaches its destination.

This paper covered the basics of firewalls and explained many of their strengths and weaknesses. It also covered the best ways to overcome those weaknesses so that the network stays more secure. The threats to networks are always evolving and adapting, so the solutions must evolve as well. The days of just installing a simple firewall and calling it a day are long gone. Ninety-one percent of companies report that firewalls are still a major part of their networks (Chickowski, 2016), but the same report acknowledges that 61% of businesses use other tools in addition to firewalls. Companies must now consider the most sophisticated threats along with threats from inside their networks. Couple that with the increasing use of cloud computing, and it becomes evident that smarter tools are needed (Cidon, 2015). The only way to mitigate these threats is to use all of the tools available, use smart network design, and develop well-educated employees.

References

Laverty, Shea. (n.d.)  The Disadvantages of a Firewall.  Retrieved from http://smallbusiness.chron.com/disadvantages-firewall-62932.html

Chickowski, Ericka.  (March 28, 2016). Like It Or Not, Firewalls Still Front And Center.  Retrieved from http://www.darkreading.com/perimeter/like-it-or-not-firewalls-still-front-and-center/d/d-id/1324866

Dunn, John E.  (January 13, 2015).  Traditional defenses not stopping breaches claims real-world FireEye study.  Retrieved from http://www.csoonline.com/article/2868054/data-protection/traditional-defences-not-stopping-breaches-claims-realworld-fireeye-study.html

Cidon, Asaf.  (June 10, 2015)  Why the Firewall is Increasingly Irrelevant.  Retrieved from http://www.darkreading.com/endpoint/why-the-firewall-is-increasingly-irrelevant/a/d-id/1320800

Bouchard, Mark.  (2007).  Next Generation Firewalls: Restoring Effectiveness Through Application Visibility and Control.  Retrieved from http://www.advantel.com/wp-content/uploads/2016/01/next-generation-firewalls.pdf

Pack, Scott. (November 2013). Difference between IDS and IPS and Firewall.  [Msg 1].  Message posted to http://security.stackexchange.com/questions/44931/difference-between-ids-and-ips-and-firewall

Schneier, Bruce. (December 2014). Lessons from the Sony Hack.  Retrieved from https://www.schneier.com/blog/archives/2014/12/lessons_from_th_4.html

Covington, Robert.  (July 2015).  Throw out the trust, and verify everything.  Retrieved from http://www.computerworld.com/article/2944794/network-security/throw-out-the-trust-and-verify-everything.html

Biddle, Sam. (December 2016).  Here’s the Public Evidence Russia hacked the DNC – It is not enough.  Retrieved from https://theintercept.com/2016/12/14/heres-the-public-evidence-russia-hacked-the-dnc-its-not-enough/

Hadnagy, Christopher. (2010). Social Engineering: The Art of Human Hacking.  Indianapolis, Indiana. Wiley Publishing, Inc.

Bortnik, Sebastian. (November 2013).  Five interesting facts about the Morris Worm (for its 25th anniversary).  Retrieved from http://www.welivesecurity.com/2013/11/06/five-interesting-facts-about-the-morris-worm-for-its-25th-anniversary/

PCI DSS v3

Filed under: Uncategorized — Tags: — Ken @ 8:09 pm

PCI DSS v3

The Payment Card Industry Data Security Standard (PCI DSS) is the credit card industry's self-regulated system of rules and regulations that provides for better information security. The standard was created in September 2006 by the five major credit card brands: Visa, MasterCard, Discover Financial Services, JCB International, and American Express. Prior to the creation of the first PCI DSS standard, there was a need for a single standard that all merchants could use to help them keep Personally Identifiable Information (PII) secure. Before that, it was more difficult to comply with requirements from the different credit card vendors because each vendor had its own standard and they did not match each other. Because of this confusion, the five major vendors created the Payment Card Industry Security Standards Council (PCI SSC). The PCI SSC brought together all of the requirements from the vendors and created the first standard, known as PCI DSS v1 (What is PCI SSC).

An important note about the PCI DSS standard is that it was created by the credit card vendors rather than by an independent body. If you have questions or want to change something about the standard, you cannot do it through the PCI SSC alone; you have to go through one of the credit card vendors, because the Council is made up of the vendors and is not a separate body (What is PCI SSC).

The purpose of this paper is to explain what the PCI DSS v3 standard is and whether it is the right standard for the industry to have. To begin, we will discuss what is required to be compliant.

The basic framework of the PCI DSS is six categories that are further subdivided into twelve basic requirements, outlined below.

  1. Install and maintain a firewall configuration to protect cardholder data.  This is the first requirement of the PCI DSS.  The basic premise is to install a firewall on your network that prevents unauthorized users from gaining access while allowing all authorized users access to that same network.  A firewall must be installed and configured between all untrusted and trusted portions of the network.  Implicit in this is that a Demilitarized Zone (DMZ) must be established that allows the public to reach a web server but denies access to the database servers that reside on the same network.  For an effective firewall, an inventory of the entire network, including all nodes, protocols, and applications, must be maintained so that only the ports and protocols that are needed are allowed on the network.  Care must also be taken to ensure that no PII is stored on the web servers in the DMZ.  The overall goal is to keep unauthorized users from gaining access to the trusted networks where the PII resides and to ensure no PII is stored on untrusted networks (PCI DSS requirements).
  2. Do not use vendor-supplied defaults for system passwords and other security parameters.  This is a basic security function but unfortunately is often overlooked.  The basic sub-requirements are to change or remove all default passwords and account user names; for example, the user name for administrative accounts should not be “admin”.  Most default passwords and user names are public information that is easily discovered using the internet.  Implicit in this requirement is restricting servers to one role only: if you have a web server, it should only be a web server, not a web server and a database server.  The purpose of these steps is to reduce the attack surface of each server.  Care should also be taken to remove any features or applications that are not needed; this also reduces the attack surface and follows the basic security principle of least privilege.  Using encrypted protocols is another requirement: instead of FTP use SFTP, and instead of HTTP use HTTPS.  This keeps data in transit secure (PCI DSS requirements).
  3. Protect stored cardholder data.  This requirement essentially protects cardholder data at rest.  A robust Encryption Key Management System (EKMS) must be in place that manages encryption keys, how they are stored, how they are disposed of, and the enforcement of cryptoperiods.  Policies should be put in place to replace passwords for cardholder and employee accounts when known data breaches occur or when encryption keys are weakened in any way.  Specific policies must decide what cardholder data is stored, for how long, and for what purpose.  If cardholder data is not needed to verify the transaction or identity, then it should not be collected or stored.  The CVV code located on the back of most credit cards must also never be stored.  Following the principle of least privilege, you should limit the number of employees that have access to cardholder data or encryption keys.
  4. Encrypt transmissions of cardholder data across open, public networks.  Encryption and secure protocols must be used for data in transit.  This is closely related to the second requirement but also covers the data exchanged between Point of Sale (POS) terminals and the servers.  Secure protocols such as IPsec, TLS/SSL, and SSH must be used, and only trusted certificates may be accepted.  This requirement applies to all types of networks, including wireless: only WPA2 or WPA can be used, and WEP is prohibited because it is too easily broken with today’s technology (PCI DSS requirements).
  5. Protect all systems against malware and regularly update anti-virus software or programs.  Plans and policies must be put in place to regularly update and scan all servers, workstations, and POS systems for malware and viruses.  Anti-virus software must be able to identify, quarantine, and remove any malware that is found.  IT departments must also monitor for new exploits and vulnerabilities discovered by security researchers and anti-virus vendors (PCI DSS requirements).  Depending on the size of your business, it might be smart to run two or more different anti-virus applications, because each vendor uses a different virus definition database and updates it at different times (Solomon, 2011).
  6. Develop and maintain secure systems and applications.  Consistently monitoring software applications for updates, and actually applying them, will keep known vulnerabilities to a minimum.  Keeping the Operating System (OS) updated will also reduce the number of known exploits that can be used for an attack.  If custom software applications are used, they must be tested for vulnerabilities.  Known best practices in software programming must be used to keep buffer overflows, cross-site scripting (XSS), poor error handling, and similar flaws at bay.  Test data must be removed from the application before it goes into production to prevent that data from being used in an exploit.  At least annually, known vulnerabilities must be patched to keep applications free from exploits (PCI DSS requirements).
  7. Restrict access to cardholder data by business need to know.  This is a basic security principle.  The principle of least privilege is the process of identifying what access employees or users need and giving them only the minimum access and permissions required to do their jobs.  At the employment level, all positions must have their roles, permissions, and access rights defined and documented.  Using Microsoft Windows Active Directory makes this process fairly easy; you can also use the Group Policy Object (GPO) Management editor to create group permissions and access rights depending on what the employees’ jobs are (Solomon, 2011).  Periodically these rights must be audited and verified to confirm they are correct and that excess rights were not issued.  If extra rights are given, they must also be given an expiration date.  Access Control Lists (ACLs) must be documented, with the default setting on employee accounts set to deny.  This enforces the least privilege principle and means the IT department has to grant permissions explicitly instead of having to specifically deny them (PCI DSS requirements).  A good practice for administrative accounts is also to restrict privileged accounts from having access to the internet and possibly email.  This requires IT personnel to have two or more accounts, but it helps enforce least privilege: if an admin account is compromised, the attacker has all of the access rights granted to that admin employee, and if the admin account never had access to the internet or email, it would be more difficult to exploit (Solomon, 2011).
  8. Identify and authenticate access to system components.  This requirement pertains mostly to employees.  Every employee account needs a unique employee ID, which allows for better audit tracking.  The IT department must put policies in place to ensure employees use strong passwords; in Windows this can be regulated through the GPO Management editor (Solomon, 2011).  Security policies must also revoke all rights of terminated employees as soon as the decision is made to terminate them.  Other policies must limit the number of unsuccessful attempts to access privileged areas of the network or data, with the account locked out for a certain time period or until unlocked by IT staff.  For employees who work remotely, two-factor authentication or better must be used to gain access to network resources.  For external vendors, accounts must be set up specifically for them with the minimum permissions needed to do their job as vendors; vendor accounts should not have access rights to cardholder data (PCI DSS requirements).
  9. Restrict physical access to cardholder data.  This sets up requirements for the physical locations where the servers containing cardholder data reside.  Despite all of the security policies and restrictions we put in place, if an attacker gains physical access to a server, they could install malware on it or copy data from it.  To prevent this, servers and physical copies of cardholder data must be secured from access by employees who are not authorized.  Servers must be secured by lock and key, preferably in a server room or closet.  A remote or video camera must be installed at the access points to record anyone who enters the room, with the recordings kept for at least three months unless restricted by local or state laws.  Network jacks in public areas that connect to the network must be disconnected or turned off so that unused ports cannot be abused.  Wireless Access Points (WAPs) should be placed where the public cannot gain physical access to them.  For vendors, an easily identifiable badge or similar means of recognition must be developed so employees know they have permission to be in non-public areas.  When visitors check in, their identities must be verified by calling the vendor and confirming who the employee is, or by checking an ID card or other credentials (PCI DSS requirements).  This is an important step in preventing social engineering attacks (Hadnagy, 2011).
  10. Track and monitor all access to network resources and cardholder data.  This requirement covers auditing.  At a minimum, the audits must track who accessed cardholder data, who accessed the audit logs, any access by employees with root or administrative rights, unsuccessful login attempts, any pausing, stopping, or changing of audit logs, and who creates or deletes system logs.  Each audit entry must record the user identification, date and time, type of event, event origin, and whether the attempt succeeded or failed (a minimal sketch of such an entry appears after this list).  All audits and events must use the same standard of time recording; if the different audit logs use different time standards, the audit process will be much more difficult when an attack occurs and you want to find out what happened.  Updating and verifying accurate timing must use industry-accepted methods, and access to time data must be protected so that no one can tamper with the time on servers or workstations and create discrepancies that would make auditing difficult.  Audit logs and trails must be protected from alteration; attackers often delete or change audit logs to hide evidence of their actions.  Audit logs must be reviewed daily to ensure they are working properly and recording data correctly, and any anomalies must be followed up to verify that there has not been a breach of data or protocol (PCI DSS requirements).
  11. Regularly test security systems and processes.  WAPs must be tested for their presence and to verify that the security protocols are working.  If guest accounts are present, they must be tested to ensure they have the proper access permissions and are restricted from cardholder data.  Network scans using tools such as the Microsoft Baseline Security Analyzer (MBSA) or Nessus must be run at least quarterly, or as needed when the network infrastructure changes.  Procedures must be put in place for regular penetration testing using industry standards.  If any exploitable vulnerabilities are found during penetration testing, they must be fixed and verified as corrected with another test (PCI DSS requirements).
  12. Maintain a policy that addresses information security for all personnel.  This requirement focuses on employees and their knowledge of information security.  Regular and consistent training and education is essential for the security process to be effective.  Security policies must be established, published, and maintained, and employees must be made aware of them.  Employees should review and sign an Acceptable Use Policy (AUP) before an employee account is created (Solomon, 2011).  To reduce the risk of an insider attack, background checks should be required before employees gain access to privileged cardholder data (PCI DSS requirements).
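
As referenced in requirement 10 above, here is a minimal, hypothetical Python sketch of an audit log entry containing the fields that requirement lists. The field names, user IDs, and events are made up; the point is simply that every entry carries the same fields and a single, consistent time standard (UTC here).

```python
import json
from datetime import datetime, timezone

def audit_entry(user_id, event_type, origin, success):
    # One consistent time standard (UTC) plus the fields requirement 10 lists.
    return {
        "user_id": user_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "origin": origin,
        "success": success,
    }

log = [
    audit_entry("emp1042", "cardholder_data_access", "pos-terminal-03", True),
    audit_entry("emp1042", "login_attempt", "vpn-gateway", False),
]
for entry in log:
    print(json.dumps(entry))
```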

These are the twelve requirements of PCI DSS. They have not changed since the standard's inception, but the sub-requirements have changed with each new version. Individually, each requirement is a good idea and would enhance the overall security of any network; these are good basic security policies. However, there are some criticisms of the PCI DSS standard. A common complaint is that it distracts from the IT department's job (Rothke, 2009). The counter to this argument is that, looked at individually and as a whole, each requirement is a good idea and is often considered a "best practice" for information security. Another criticism concerns the way it is implemented. Mathew J. Schwartz interviewed John P. Pironti, the president of risk and information security for IP Architects, who is quoted as saying:

“Security by compliance, doesn’t do a company any favors, especially because attackers can reverse-engineer the minimum security requirements dictated by a standard to look for holes in a company’s defense.” (A near scam)

Looking at the requirements of PCI DSS, it would be easy to disagree with him. Take, for example, the task of changing all of the default passwords: an attacker's job would be far easier if the defaults were left in place. If the PCI DSS standards are only implemented in a check-the-box fashion and never looked at again, then the criticism is easier to understand. But even the PCI SSC website acknowledges that the PCI DSS is only the foundation of information security; it was not meant to be the "be all, end all" of data security. These are minimum standards that will help protect companies from being held liable for data breaches and data loss. Another complaint about the PCI DSS standard is that it is too costly to implement. The counter to this argument is that there are four different levels of PCI DSS compliance. Level 4 is 20,000 transactions or fewer per year. Level 3 is for businesses with anywhere from 20,000 to 1,000,000 transactions with any of the credit card vendors per year. Level 2 is from 1,000,000 to 6,000,000, and Level 1 is 6,000,000 or more transactions per year. Each level requires more actions than the previous one and with that incurs more cost. However, if a business has a security mindset and cares about customer data, these actions would not be as expensive and would probably be implemented anyway (PCI DSS requirements).
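
As a small illustration, here is a Python sketch that maps annual transaction volume to a compliance level using the thresholds quoted above. It is only a summary of the figures in this post, not an official classification tool.

```python
# Map annual transaction volume to the compliance levels described above.
def pci_level(transactions_per_year):
    if transactions_per_year >= 6_000_000:
        return 1
    if transactions_per_year >= 1_000_000:
        return 2
    if transactions_per_year >= 20_000:
        return 3
    return 4

for volume in (5_000, 250_000, 2_500_000, 10_000_000):
    print(f"{volume:>10,} transactions/year -> Level {pci_level(volume)}")
```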

Bruce Schneier, a well-known security professional, looks at regulation this way:

“Regulation–SOX, HIPAA, GLB, the credit-card industry’s PCI, the various  disclosure laws, the European Data Protection Act, whatever–has been the best stick the industry has found to beat companies over the head with. And it works. Regulation forces companies to take security more seriously, and sells more products and services.” (Schneier 2008)

Industry self-regulation works and is much more advantageous than government regulation. Changes can be implemented more quickly, keeping up with the changing security environment. If any changes hurt the credit card vendors' bottom line, they can introduce new changes or go back to older standards. Companies that choose not to comply with the PCI SSC standards could decide not to accept credit cards and take cash only. However, more and more companies are doing business online, and with that, industry standards are a must to protect cardholder data.

In the podcast Wh1t3Rabbit, the speakers make a good point in stating that "information security is a process not a product" (Jardine 2015).  Some of the criticism comes from thinking that if a company pays a Qualified Security Assessor (QSA) to assess the business and help get it into compliance, everything will be OK and nothing bad will happen.  But this thinking runs counter to the basic security process.  A first principle that should be considered during the security process is continuous improvement (Smith 2013).  Once a company becomes PCI compliant, it should not stop the process until the next assessment; it should always be checking for improvements in how it operates and how it secures its data.

In conclusion, the PCI DSS is the right process for keeping cardholder data secure.  Implementing the twelve security requirements to bring a company into compliance will help protect that company from being held liable and sued by cardholders.  This is also a credit card industry standard created by the credit card vendors.  If data and identities are stolen and that leads to monetary loss for customers, the credit card vendors are the ones who have to return the stolen money to the cardholders.  It is only right that, since the credit card vendors are liable to their customers, they can require businesses to take certain steps to protect the data and their customers.

References

PCI Security Standards Council, LLC. (Nov 2013). PCI DSS Requirements and Security Assessment Procedures Version 3.0 Retrieved from https://www.pcisecuritystandards.org/documents/PCI_DSS_v3.pdf

Jardine, Rafal. Santarcangelo, Michael. Man, Jeff. (January 5, 2015).  PCI DSS and Security (Yes, really).  Retrieved from http://podcast.wh1t3rabbit.net/dtsr-episode-124-pci-dss-and-security-yes-really

Klemic, Kane.  (2012).  Payment Card Industry Standards and the Sony Data Breach.  Retrieved from http://www.armaedfoundation.org/pdfs/Klemic_Payment_Card_industry_2012.pdf

Schneier, Bruce. (2008).  Bruce Schneier reflects on a decade of security trends.  Retrieved from https://www.schneier.com/news/archives/2008/01/bruce_schneier_refle.html

Smith, Richard E.  (2013).  Elementary Information Security.  Burlington, MA:  Jones & Bartlett Learning.

PCI Security Standards Council, LLC (n.d.).  What is the PCI Security Standards Council?.  Retrieved from https://www.pcisecuritystandards.org/security_standards/role_of_pci_council.php

Solomon, Michael G. (2011).  Security Strategies in Windows Platforms and Applications.  Sudbury, MA:  Jones & Bartlett Learning.

Hadnagy, Christopher. (2011).  Social Engineering:  The Art of Human Hacking.  Indianapolis, IN:  Wiley Publishing, Inc.

Rothke, Ben.  (2009 Apr).   PCI Shrugged: Debunking Criticisms of PCI DSS.  Retrieved from http://www.csoonline.com/article/2123972/compliance/pci-shrugged–debunking-criticisms-of-pci-dss.html

No Author.  (2013 July).  “A near scam”-  Criticisms of the Payment Card Industry Data Security Standard.  Retrieved from http://wemakewebsites.com/blog/a-near-scam-criticisms-of-the-payment-card-industry-data-security-standard

Rules for Computing Happiness

Filed under: Uncategorized — Tags: — Ken @ 8:08 pm

I am going to post a list of rules that Alex Payne uses to keep his sanity while using computer technology.  The list was posted way back in 2008, and it is surprising how well the rules have held up.  The only rule I would probably change is the one about relying on cloud computing.  I would do everything I could to stay away from cloud services, because companies have continually proven that they are not responsible enough to handle everyone's PII.  Technology has come a long way, and for now you can get quite a lot accomplished with Raspberry Pis and other small computers.

I plan on adopting most, if not all, of the rules.  I am also slowly but surely dialing back what I do online with social media.  I am currently writing a pretty big post about social media in general.  Stay tuned.

Here is the original link to Alex’s rules for computing happiness https://al3x.net/posts/2008/09/08/al3xs-rules-for-computing-happiness.html.

A list.

Software

  • Use as little software as possible.
  • Use software that does one thing well.
  • Do not use software that does many things poorly.
  • Do not use software that must sync over the internet to function.
  • Do not use web applications that should be desktop applications.
  • Do not use desktop applications that should be web applications.
  • Do not use software that isn’t made specifically for your operating system. (You’ll know it when you see it because it won’t look right or work correctly.)
  • Do not run beta software unless you know how to submit a bug report and are eager to do so.
  • Use a plain text editor that you know well.  Not a word processor, a plain text editor.
  • Do not use your text editor for tasks other than editing text.
  • Use a password manager. You shouldn’t know any of your passwords save the one to your primary email account and the one to your password manager.
  • Do not use software that’s unmaintained.
  • Pay for software that’s worth paying for, but only after evaluating it for no less than two weeks.
  • Thoroughly delete all traces of software that you no longer use.

Hardware

  • Do not buy a desktop computer unless your daily computing needs include video/audio editing, 3D rendering, or some other hugely processor-intensive computing task.  Buy a portable computer instead.
  • Do not use your phone/smartphone/PDA/UMPC for tasks that would be more comfortably and effectively accomplished on a full-fledged computer.
  • Use a Mac for personal computing.
  • Use Linux or BSD on commodity hardware for server computing.
  • Do not use anything other than a Mac at home and Linux/BSD on the server.
  • The only peripheral you absolutely need is a hard disk or network drive to put backups on.
  • Buy as large an external display as you can afford if you’ll be working on the computer for more than three hours at a time.
  • Use hosted services in lieu of hosting on your own hardware (or virtual hardware) for all but the most custom applications.

File Formats

  • Keep as much as possible in plain text. Not Word or Pages documents, plain text.
  • For tasks that plain text doesn’t fit, store documents in an open standard file format if possible.
  • Do not buy digital media crippled by rights restriction technologies unless your intention is to rent the content for a limited period of time.

A study of the cumulative effects of database attacks

Filed under: Uncategorized — Tags: — Ken @ 8:06 pm

Database Attacks:  A Study of the Cumulative Effects of Database Attacks

            One of the first large-scale data breaches occurred in 2005 at America Online, better known as AOL.  In that breach, over 92 million records were stolen.  Since then, the news has been peppered with more and more stories about bigger and bigger data breaches.  In 2007, hackers stole 94 million records containing customer information from stores such as TJ Maxx, Marshalls, and Ross.  In 2009, 76 million records were stolen from a laptop with an unencrypted hard drive that belonged to the Department of Veterans Affairs.  More recently, in 2014, Target was the target, no pun intended, of hackers who stole 70 million records.  In 2012, the U.S. CERT conducted an analysis of 47,000 incidents and 621 confirmed data breaches and found that finance (37%) and retail companies (15%) led the way in data breaches (Keanini, 2014).  Most recently, the U.S. Office of Personnel Management lost over 25 million records of individuals who had applied for security clearances over the last 20 years.  Obviously, data breaches are a large problem, and they are only getting worse.  Not worse in terms of raw numbers, but worse in the type of data that is being stolen.

Since businesses started using computers, and especially since they started conducting their affairs online, there has been an exponential increase in the data being collected.  This data runs the gamut from car dealerships collecting the financial data and credit histories of car buyers, to lawyers collecting information about cases, to the government collecting information about its employees.  The type of information being collected usually helps determine the likely motivations of a hacker trying to get that information.  For example, if the government is collecting and storing all of the information gathered from an SF-86 form, state-sponsored hackers would most likely be the ones trying to steal it so that they can build dossiers on high-value targets to exploit.  Having past drug convictions or credit histories would be very helpful when attempting to turn an individual into a spy for that country.  Data and case notes collected by law firms would be helpful to the opposing party so that they could build counterarguments during a trial.  Any of these reasons and more are likely motivations for hackers to steal information from businesses and offices.

One problem that is not being addressed is the cumulative effect that these breaches can have.  The government and most businesses seem to think that these breaches occur in a vacuum and have no effect beyond what can be gathered from each individual breach.  The problem, however, is that data collection is so prolific that typical users reuse the same answers for security questions, the same passwords, and the same contact information.  Think about it: how many passwords could I reset if I knew a user's name and just a little information like their mother's maiden name?  Likely quite a few.  The problem can be even worse for small to medium companies that do not have enough money to hire proper information security employees.  If a user were to use the same information for an account with a small company as for an account with a larger company, things could quickly get out of hand.  There have been more than a few cases of users with terrible security practices: the same usernames and the same easy-to-remember passwords for all of their accounts.  So, when one account is breached, the hacker can quickly go to additional accounts and gain unauthorized access to those as well.  We are failing to understand the cumulative effects of the information released from database attacks.

The easiest way for hackers to gain access to the valuable information that companies collect is to attack their databases.  According to a white paper written by Imperva, the top three types of attacks on databases for 2015 are excessive and unused privileges, privilege abuse, and input injections (Imperva, 2015).  Input injection replaced SQL injection in 2015 due to the increased use of "big data" technologies built on NoSQL databases.  NoSQL databases like MongoDB (Imperva, 2015) are still susceptible to injection attacks similar to SQL injection.
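
As a concrete illustration, the sketch below is a hypothetical Python example using the pymongo driver; the database, collection, and field names are assumptions, not details from the Imperva paper.  It shows how an unsanitized login check against MongoDB can be subverted with a query-operator payload, and how a simple type check blocks it.

    # Hypothetical sketch of a MongoDB operator-injection flaw in a pymongo-backed login check.
    from pymongo import MongoClient

    users = MongoClient("mongodb://localhost:27017")["appdb"]["users"]  # assumed test instance

    def login_unsafe(username, password):
        # If username/password come straight from parsed JSON, an attacker can submit
        # {"$ne": None} instead of a string and match any document in the collection.
        return users.find_one({"username": username, "password": password}) is not None

    def login_safer(username, password):
        # Rejecting non-string input blocks the operator injection shown above.
        if not isinstance(username, str) or not isinstance(password, str):
            return False
        return users.find_one({"username": username, "password": password}) is not None

    # Example attack payload an application might receive as JSON:
    # {"username": {"$ne": None}, "password": {"$ne": None}}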

Excessive and unused privileges are the number one vulnerability for databases.  The reasons for this are numerous.  Often, when an employee is hired, they are set up with a new account with the permissions they need to do their job.  As their experience grows and they get promoted, they often keep their old permissions and acquire new ones for their increased responsibilities.  After some years, these employees may end up having permissions and access to everything the company has.  This happens mostly because of weak network security policies.  If that employee falls victim to malware and has their account compromised, the hacker who controls that account has access to everything the employee has; they may not even have to exploit a vulnerability.  On the other end of the spectrum, if an employee gets fired, that employee still has access to everything specified in their permissions and may be tempted to download or delete sensitive company data from the databases.  The best practice to mitigate these problems is to routinely audit employee accounts to determine whether employees have the correct permissions.  This follows the common best practice of "least privilege" (Oriyano, 2010).
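
A routine audit of this kind does not require specialized tooling.  The minimal Python sketch below is illustrative only; the role names and permission sets are assumptions.  It compares each employee's accumulated permissions against the baseline for their current role and flags the excess.

    # Minimal sketch of a least-privilege audit: flag permissions beyond the role baseline.
    ROLE_BASELINE = {
        "analyst": {"read_reports"},
        "dba":     {"read_reports", "modify_schema", "manage_backups"},
    }

    employees = [
        {"name": "alice", "role": "analyst",
         "permissions": {"read_reports", "modify_schema"}},   # kept from an old role
        {"name": "bob", "role": "dba",
         "permissions": {"read_reports", "manage_backups"}},
    ]

    for emp in employees:
        excess = emp["permissions"] - ROLE_BASELINE[emp["role"]]
        if excess:
            print(f"{emp['name']}: excessive privileges {sorted(excess)} should be revoked")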

Privilege abuse is the second highest vulnerability for databases.  This can take the form of an employee stealing company information in an insider attack.  It also arises because employees may know how to get around company policies and abuse the privileges they have, or find a workaround to get the permissions they do not have.  For example, if an employee has permission to view files but not to overwrite them, they could find a workaround by accessing a file with a different type of program that allows them to change the information.  This may not always be malicious, but it can still put the company's assets in jeopardy if a hacker can exploit the same vulnerability.  The best way to mitigate this threat is to ensure that employees have the correct and necessary rights granted to their accounts.  If employees need extra rights, those rights can be granted with limitations on how long they apply (Oriyano, 2010).

The third vulnerability for databases is injection attacks, best known in the form of SQL injection.  However, as stated previously, injection attacks are no longer focused only on SQL databases.  With the proliferation of "big data," NoSQL databases like MongoDB are now vulnerable to injection attacks as well.  One of the reasons databases are so vulnerable to injection attacks is that very little money is invested in making them secure.  In 2015, IDC reported that less than 5% of the $27 billion spent on security products directly addressed data center security (Imperva, 2015).  This either shows a lack of knowledge of how vulnerable databases are, or that companies fail to assess risks appropriately and, after having spent most of their money on development, have little left to devote to secure testing and development.  The best ways to mitigate injection attacks are to keep networks and applications updated with patches, to constantly scan for vulnerabilities, and to patch or sandbox any that are found (Smith, 2013).  Malicious web requests should be denied by properly configuring firewalls.  Intrusion detection systems (IDS) and intrusion prevention systems (IPS) should also be installed to monitor all database activity (Oriyano, 2010).  If any unusual activity is detected, the account should be locked out or denied access until the IT department is notified.  This could stop many of the problems.
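
For SQL databases specifically, parameterized queries are the standard developer-side defense against injection.  The short Python sketch below uses the built-in sqlite3 module purely for illustration; the table and column names are assumptions.  It contrasts a vulnerable string-built query with a parameterized one.

    # Contrast a string-built SQL query (injectable) with a parameterized query (safe).
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

    user_input = "' OR '1'='1"  # classic injection payload

    # Vulnerable: the payload is concatenated into the SQL text, changes its meaning,
    # and returns every row in the table.
    vulnerable = conn.execute(
        "SELECT name FROM users WHERE name = '" + user_input + "'").fetchall()

    # Safer: the driver binds the payload as data, so it matches nothing.
    parameterized = conn.execute(
        "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

    print(vulnerable)      # [('alice',), ('bob',)]
    print(parameterized)   # []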

So why are databases so targeted by hackers?  Mostly, it is because of the information contained in them.  A company can often learn a lot about a hacker's motivations and intentions from what is stolen.  If a hacker steals information about future projects, you could deduce that they want to sell it to the company's competitors.  Doxing is another recent phenomenon that hackers have started using.  Short for "dropping documents," doxing is the collection of private information about a company or individual, followed by its release to the public (C.S-W, 2014).  Doxing a target is used to shame them or to extort money from them.  The hacker group Anonymous was famous for doxing several companies and organizations (Zetter, 2014).  In fact, the 2014 attack on Sony Corporation was a prime example of a doxing attack on a company.  North Korea decided to hack into Sony's networks because of a movie that was to be released about the killing of its leader, Kim Jong Un.  The hackers broke into Sony's networks, stole hundreds of gigabytes of data, and then released them onto the Internet.  This was an embarrassment to Sony because of the information contained in the released emails, which included several negative remarks about actors and actresses in upcoming movies.  Amy Pascal, a Sony co-chairman, was fired because of the information released in this attack.

The recent data breach at the U.S. government's Office of Personnel Management (OPM) resulted in over 25 million records being copied and stolen.  This included all of the information on the notorious SF-86 form, the form someone needs to fill out for a background check.  All drug history, convictions and citations, and employment history are included.  The U.S. government is fairly sure that the attack came from China.  Espionage, in this case, would probably be the motivation.  Almost any country would love to obtain this type of information in order to build dossiers on individuals and attempt to blackmail them for information or turn them into spies (Austin, 2015).

The most common motivation for hackers to steal information from databases is identity theft.  Most identity theft is focused on stealing money.  If hackers steal health insurance information, it is likely in order to steal money by filing false insurance claims.  If hackers steal information from an e-commerce site, it is likely in order to make purchases with stolen identities.  In 2012, identity theft cost the U.S. $24.7 billion (Rotter, 2014).

In 2015, the Internal Revenue Service (IRS) disclosed that 104,000 transcripts had been released for individuals who did not request them (Harney, 2015).  The IRS has since pulled the "Get Transcript" feature from its website.  It appears that the hackers were able to obtain the transcripts by following the normal procedure for requesting one.  To get a transcript, an individual needs their Social Security Number (SSN), email address, and some other personal information.  None of this information would be difficult to get, considering the millions of records that have been stolen in recent years.  Even more information can be purchased online on black market websites that sell stolen data.  A document from Dell SecureWorks found that reliable account information, including credit card account numbers, could be had for as little as nine dollars per account (Dell, 2014).  You could also buy Distributed Denial of Service (DDoS) attacks or doxing attacks for just as little (Dell, 2014).  The information could also have been stolen from mortgage companies via a form called the 4506-T, which mortgage companies use when going through a third-party vendor to access IRS transcripts via the Income Verification Express Service (IVES).  The IRS was only able to discover the breach because over 35,000 of the affected account holders had already filed their taxes.  An attempt was made on over 200,000 accounts, but only 104,000 were successful (Warren, 2015).

This breach shows an easily exploitable weakness in the way websites verify identity online.  With almost every website requiring someone to sign up or register for an account, users have difficulty remembering their account passwords and usernames.  This leads to users recycling the same usernames and passwords over and over again.  Massive data breaches do not leak data in isolation; each time personal data is leaked, it makes users more vulnerable to these types of information theft.  The solutions to these problems are two-fold.  On the user side, users need to learn and understand the threats and risks of how they operate online.  On the development side, developers need to learn and use secure coding practices to avoid injection attacks and other exploits.  System administrators and network operators need to stay vigilant, constantly updating and scanning networks for vulnerabilities and patching anything that is discovered.  Most of these solutions do not cost much, are mostly knowledge based, and can make for a much more secure online environment.

References

Warren, Z. (2015). IRS a data breach victim, 104,000 taxpayers' records stolen. Inside Counsel. Breaking News. Retrieved from http://search.proquest.com/docview/1683739992?accountid=8289

Keanini, T. K. (2014). Security: Beyond Target and Neiman Marcus More of the Same Everywhere Else. Database and Network Journal, 44(2), 6.

C.S-W (2014). What doxing is, and why it matters: The Economist explains. London: The Economist Newspaper NA, Inc.

Rotter, Kimberly. (2014). The staggering cost of identity theft in America. Credit Sesame. Retrieved from http://www.creditsesame.com/blog/staggering-costs-of-identity-theft/

Dell Secureworks (2014). Underground Hacker Markets. Dell Secureworks. Retrieved from http://www.secureworks.com/assets/pdf-store/white-papers/wp-underground-hacking-report.pdf

Harney, Kenneth. (2015). IRS data breach may prove worrisome for those seeking a mortgage. The Washington Post. Retrieved from https://www.washingtonpost.com/realestate/irs-was-told-in-2011-that-its-security-and-privacy-controls-were-inadequate/2015/06/01/de42884a-0886-11e5-95fd-d580f1c5d44e_story.html

Zetter, Kim. (2014). Sony got hacked hard:  Here’s what we know and don’t know so far.  Wired.  Retrieved from http://www.wired.com/2014/12/sony-hack-what-we-know/

Oriyano, S. (2010). Hacker Techniques, Tools, and Incident Handling. Sudbury, MA: Jones & Bartlett Learning.

Smith, R. (2013). Elementary Information Security. Burlington, MA: Jones & Bartlett Learning.

Austin, Ernie. (2015). Stolen Security Clearance Information Has Potential for Blackmail. Rockaway: Advantage Business Media.

Database Security Best Practices

Filed under: Uncategorized — Tags: , — Ken @ 8:05 pm

Database Security Best Practices

            So far in 2017, there have been a vast number of data breaches affecting an astronomical number of user accounts.  A lot of the data exposed in these breaches came from databases that were not secured.  From the Verizon incident, in which 14 million customer records were left exposed on an Amazon S3 storage server (Wittaker, 2017), to the more recent Equifax breach, a large number of these incidents could have been prevented had the companies followed the current best practices for securing and hardening databases.

Three of the most popular database applications used today are Oracle Database (Oracle), Microsoft SQL Server, and MySQL, and each has its own ways of being secured against hackers.  The first to be covered is the Oracle database.  One of the first things a database administrator (DBA) needs to do once the database is installed is to lock down the default accounts.  These default accounts have privileged access and come with preset values that are common knowledge to hackers and IT professionals.  The DBA can use the Oracle Database Configuration Assistant (DBCA) to automatically lock or expire the default accounts (Stark, 2012).  Updating and patching the Oracle DB is the next critical step in securing the database; if updates are not applied almost immediately, the database will be exposed to well-known security vulnerabilities.  The next step is to reduce the attack surface by removing all unnecessary privileges from public roles.  There are specific functions in Oracle that belong to a role called PUBLIC, and PUBLIC privileges belong to every account user, which is where the problem lies.  By eliminating services that belong to the PUBLIC role, the DBA can better follow the principle of least privilege, allowing users only the permissions they need to do their jobs.  Next, auditing should be enabled so that Oracle logs SQL commands (Stark, 2012).  This is not enabled by default, so if a security incident occurs, it will be more difficult to understand what happened.  Another good practice is setting up database triggers that log and audit changes to the database schema.  A schema in Oracle is the logical container for data structures; examples of schema objects are tables and indexes (Ashdown, 2010).  By setting up certain triggers, you can audit logs to find out who logged on and what changes were made to the schema.  This is helpful when a DBA makes changes that affect the functionality of the databases, or when hackers make changes to the DB.  Password management is another practice that should be enabled for all accounts; the same password policies used in Microsoft Active Directory could be applied here.  It is important to know that the default accounts do not have password management enabled (Stark, 2012).  The last best practice for securing Oracle databases is enabling encryption.  Oracle Database supports the Federal Information Processing Standard (FIPS) encryption algorithm Advanced Encryption Standard (AES).  Enabling encryption ensures that data sent from the database to the user is ciphertext and not plaintext (Huey, 2017).
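
If the DBCA is not convenient, locking and expiring the default accounts can also be scripted.  The sketch below is a hypothetical example using the python-oracledb driver; the account list, credentials, and connection string are assumptions and not part of Stark's guidance.

    # Hypothetical sketch: lock and expire well-known Oracle default accounts.
    import oracledb  # assumes the python-oracledb driver is installed

    DEFAULT_ACCOUNTS = ["SCOTT", "OUTLN", "DBSNMP"]  # illustrative subset only

    conn = oracledb.connect(user="SYS", password="change_me",
                            dsn="dbhost/ORCLPDB1", mode=oracledb.AUTH_MODE_SYSDBA)
    cur = conn.cursor()
    for account in DEFAULT_ACCOUNTS:
        # ALTER USER is DDL; identifiers cannot be bound, so only use a fixed, trusted list.
        cur.execute(f"ALTER USER {account} ACCOUNT LOCK PASSWORD EXPIRE")
    conn.close()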

The next major database is MySQL.  One of the first things a DBA should do to secure the database server is update it and apply any needed security patches.  On top of that, the administrator should also update the anti-virus and anti-spam software, as well as the OS running on the server.  Another consideration is remote access: if remote access is not needed, that feature should be disabled.  If remote access is needed, the firewall should be configured to allow only the specific account, and possibly the specific computer, that requires remote access to the database.  To prevent unauthorized reading of local files, the DBA should disable the use of the LOAD DATA LOCAL INFILE command (Maman, 2015).  One way a hacker can exploit this command is by using LOAD DATA LOCAL INFILE to read the /etc/passwd file into another table, giving the hacker a list of all of the user accounts on the system.  A fourth task for the DBA is to rename the administrator's account.  By default, the account name is root, which is well known as a common name for the admin account; if a hacker gains access to the root account, they have access to the entire database.  Any old, anonymous, or obsolete accounts should be removed and deleted.  By default, anonymous user accounts can exist without any passwords, and if this is left open, it is easy to exploit.  The next thing the DBA should implement is the principle of least privilege, which reduces the attack surface of each account.  As with the Oracle database, the DBA should enable logging so that events such as errors can be monitored.  Lastly, the database administrator should ensure that the data is encrypted.
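
A quick way to verify a couple of these items is to query the grant tables directly.  The hypothetical Python sketch below uses the mysql-connector-python driver; the host and audit account are assumptions.  It flags anonymous accounts and reports whether local_infile is still enabled.

    # Hypothetical sketch: check for anonymous MySQL accounts and the local_infile setting.
    import mysql.connector  # assumes mysql-connector-python is installed

    conn = mysql.connector.connect(host="127.0.0.1", user="audit", password="change_me")
    cur = conn.cursor()

    cur.execute("SELECT user, host FROM mysql.user WHERE user = ''")
    for user, host in cur.fetchall():
        print(f"anonymous account found for host {host}; it should be dropped")

    cur.execute("SHOW VARIABLES LIKE 'local_infile'")
    for name, value in cur.fetchall():
        print(f"{name} = {value}  (ON means LOAD DATA LOCAL INFILE is still allowed)")

    conn.close()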

The third major database application is Microsoft SQL Server.  Once the SQL Server instance is installed, the DBA should begin hardening the server.  The administrator can use the SQL Server Configuration Manager tool to disable or remove any features or services that will not be used.  This reduces the attack surface and makes the server less vulnerable to attacks.  Windows Authentication mode should be used over the built-in SQL authentication because of how Windows Authentication integrates with Active Directory.  If Windows Authentication mode is used, all of the security policies created within Active Directory can apply to the SQL Server.  Otherwise, if SQL authentication is used, the DBA will have to create separate user accounts and recreate the account infrastructure, which adds to the complexity of maintaining user accounts, so it should not be used.  The SA account should be disabled and renamed; this will stop a hacker from searching for that account.  Instead, create new administrator accounts and use Role Based Access Control (RBAC) to prevent any one account from having access to everything (Ferraiolo & Kuhn, 2017).  Security by obscurity is not an actual security control, but it can make a hacker's job more difficult, so the DBA should also consider changing the default ports used by the SQL Server (Maman, 2015).  The administrators should also hide the SQL Server instances or disable the SQL Server Browser service.
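
Disabling and renaming the SA account can likewise be scripted.  The sketch below is a hypothetical Python example using the pyodbc driver over a trusted (Windows Authentication) connection; the server name and the replacement account name are assumptions.

    # Hypothetical sketch: disable and rename the built-in SA login on SQL Server.
    import pyodbc  # assumes pyodbc and the Microsoft ODBC driver are installed

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=dbhost;Trusted_Connection=yes",
        autocommit=True)
    cur = conn.cursor()
    cur.execute("ALTER LOGIN sa DISABLE")                    # disable first, while the name still exists
    cur.execute("ALTER LOGIN sa WITH NAME = [retired_sa]")   # new name is an illustrative assumption
    conn.close()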

Securing and hardening the database will go a long way toward securing the data used on the network, but it is not the only thing that needs to be secured.  Using the concept of defense in depth, the database can be secured even further.  Several organizations develop lists of controls for securing networks.  SANS has a Top 20 Critical Security Controls list that is vendor neutral.  SANS is a for-profit organization that develops cybersecurity and information security training and offers certifications for those courses.  SANS also created the Center for Internet Security, which maintains the Top 20 Security Controls list.

The SANS Top 20 list is pretty thorough, and it is designed so that you start at control number 1 and work through to control 20.  CSC 1 and 2 cover inventorying all authorized and unauthorized devices and software on the network.  Because of the popularity of BYOD devices, it can be difficult for an administrator to know what is happening on their network, and if you do not know what is on your network, you cannot secure it.  CSC 3 deals with secure configurations for hardware and software on mobile devices, laptops, workstations, and servers; after you understand what you have on your network, you can use the best-known security practices to harden those devices.  CSC 5 is the controlled use of administrative privileges, also known as the principles of least privilege and need to know.  By keeping privileges in check and only granting them to a small number of user accounts, you limit the network's exposure to vulnerabilities.  CSC 6 is maintenance, monitoring, and analysis of audit logs; this control deals with how to set up audit logs so that you can detect intrusions or security events if they happen.  CSC 7 establishes email and web browser protections; some of the actions in this control reduce the attack surface by disabling or removing any services that you do not use.  CSC 8 is malware defenses; actions in this control include installing anti-virus software, firewalls, and intrusion detection/prevention systems.  CSC 9 is the limitation and control of network ports, protocols, and services.  Port security and management allow unused switch ports to be turned off so that no rogue devices can be installed, and workstations and servers can be assigned specific ports so that if they are moved, they will not have access.  CSC 10 deals with data recovery capability.  CSC 11 controls the secure configuration of network devices.  CSC 12 establishes controls for boundary defense; this is where firewalls are configured and DMZs are established.  CSC 13 deals with data protection and controls how encryption is used on the network.  CSC 14 controls access based on the need to know.  CSC 15 covers the use of wireless access points, which is important today because of the use of cell phones and wireless printers.  CSC 16 establishes controls for account monitoring; security policies affect this control, and for Microsoft Server it is easy to control and monitor accounts with Active Directory.  CSC 17 sets up security skills assessments and creates training for employees to fill gaps in training and knowledge.  CSC 18 is application software security; the OWASP Top Ten list for application security can be used here.  CSC 19 establishes incident response and management.  Lastly, CSC 20 covers penetration testing and red team exercises.  As the IT security team works its way through this list, the entire network will become more secure: not only the databases but also the computers, networking devices, and the applications that access the data within the databases.

One recent trend has been that hackers are targeting databases.  Two reasons are that databases contain an enormous amount of data, and that companies often overlook securing them because they are concentrating so much on application security and perimeter security.  System and database administrators are also sometimes reluctant to make changes to a database because everything works, and they do not want changes that could break functionality.  According to the Verizon 2010 Data Breach Investigations Report, insiders were the cause of 46% of data breaches, which they accomplished by abusing their privileges (Barnes, n.d.).  By using the least privilege principle and continuously monitoring log files, a company could prevent many insider threats.  The report also noted that stolen credentials were the second most common exploit used in database breaches (Barnes, n.d.); the first was SQL injection, which has also been on the OWASP Top Ten list for the past several years.  Some other striking statistics from the Verizon report were that 96% of the breaches in 2009 could have been prevented through the use of simple controls, and 79% of organizations that had credit card data breaches in 2009 had failed their last PCI audit (Barnes, n.d.).

Today we constantly hear about vulnerabilities like SQL injection, Cross-Site Scripting, and Distributed Denial of Service (DDoS) attacks, but it is important to understand that most of those vulnerabilities are external to the databases themselves.  The risks that lie in the databases are more straightforward: poor password policies and practices, missing patches, misconfigured databases, and excessive privileges.  Those vulnerabilities are neither complex nor difficult to fix.  By taking the time before installation begins to understand the risks and vulnerabilities, IT administrators can implement policies and processes to mitigate those threats before they even appear.

References

Wittaker, Zack. (September 2017).  2017’s biggest hacks, leaks, and data breaches – so far.  Retrieved from http://www.zdnet.com/pictures/biggest-hacks-leaks-and-data-breaches-2017/3/

Stark, Chris.  (March 2012).  Top 10 Oracle Steps to a Secure Oracle Database Server.  Retrieved from http://blog.opensecurityresearch.com/2012/03/top-10-oracle-steps-to-secure-oracle.html

Ashdown, Lance. Kyle, Tom. (February 2010).  Oracle Database Concepts 11g Release 2 (11.2).  Retrieved from http://docs.oracle.com/cd/E29505_01/server.1111/e25789/tablecls.htm#CNCPT111

Huey, Patricia.  (April 2017).  Configuring Network Data Encryption and Integrity.  Retrieved from http://docs.oracle.com/database/122/DBSEG/configuring-network-data-encryption-and-integrity.htm#DBSEG9593

Maman, David.  (October 2015).  MySQL database – The world’s most popular open source database.  Retrieved from http://www.hexatier.com/mysql-database-security-best-practices/

Maman, David.  (October 2015).  Microsoft SQL Server Database Security Best Practices.  Retrieved from http://www.hexatier.com/microsoft-sql-server-database-security-best-practices/

Ferraiolo, David.  Kuhn, Rick.  (September 2017).  Role Based Access Control.  Retrieved from https://csrc.nist.gov/projects/role-based-access-control

Barnes, Rob. (n.d.).  Database Security and Auditing:  Leading Practises.  Retrieved from   http://www.bhamisaca.com/images/Database_Security_and_Auditing_Best_Practices.pdf

Firewall Policies for Industrial Control Systems

Filed under: Uncategorized — Tags: , — Ken @ 8:04 pm

Firewall Policies for Industrial Control Systems

            For this paper, the term Industrial Control Systems (ICS) will be used as a general term for Supervisory Control and Data Acquisition (SCADA) systems, Programmable Logic Controllers (PLC), and Human Machine Interfaces (HMI).  ICS is the technology that connects the Information Technology (IT) world to the Operational Technology (OT) world (Bodungen, 2017).  It is used every day to run power companies, oil refineries, space technology, and manufacturing plants.

Before this technology, when a company wanted to adjust a machine, it needed an employee to manually change the machine until the desired outcome was reached.  Eventually, in the 1960s, some of this technology was connected to the popular mainframe computers of the time.  When personal computers became popular in the 1990s, companies wanted more control over and management of the OT technology, so ICS were used to connect IT and OT on internal networks and later on the internet.  With those connections came a host of new problems that control systems engineers and computer engineers had never had to deal with before: all of the threats and vulnerabilities on the internet were introduced to this new technology.  On the one hand, you had vulnerabilities from the web, and on the other hand, you had all of the vulnerabilities in the ICS exposed to the world.  To be sure, those vulnerabilities had been there from the get-go, but they could be mitigated by workarounds and by the fact that the systems were isolated from most external threats.

Another unique issue with ICS technology is that many systems still use the original technology they were built with; those systems have never been updated or upgraded.  Most of the time this is due to the nature of OT requiring extremely high availability: imagine if a power plant had to shut down, even briefly, to upgrade its equipment.  In reality, most companies never upgrade their equipment because it "just works."

So, when the equipment used by companies becomes 20 or 25 years old and gets connected to the internet, there are some interesting considerations that IT security professionals need to understand in order to secure those technologies without bricking them in the process.  A computer from 20 years ago would be considered ancient today and would be almost unusable for what most people use computers for now.  One example of how these systems are unique is that most of the older technology has just enough power to run what it is supposed to run and no more.  Most "best practices" would say that you want to encrypt all communications between your devices and computers; with ICS that may not be possible.  Industrial Control Systems are unique and therefore need special consideration when developing security policies.  This paper will discuss four different types of security policies out of the possibly hundreds of types out there.  The first are firewall policies.

Firewalls are a major component of any network, and ICS networks are no different.  Firewalls are used to filter desired traffic from undesired traffic.  One of the primary uses of firewalls is to provide network segmentation, which plays into the broader defense-in-depth strategy.  A popular model for logically segmenting ICS networks is the Purdue Enterprise Reference Architecture (PERA), more commonly known as the Purdue Model (Bodungen, 2017).  The Purdue Model divides a network into six levels, labeled Level 5 through Level 0.  Level 5 is the top layer, the Enterprise level; this is the level on which a corporate office and its network operate.  Level 4 is also an enterprise level, but it covers branch offices and the physical locations where the equipment resides.  Level 3 is the ICS-DMZ.  This demilitarized zone is used the same way a DMZ would be used for web applications; it is the level where SCADA systems are located so that they can be reached by the enterprise offices while still communicating with the components below them.  Level 2 is Area Supervisory Control, where components like PLCs and HMIs are located.  Level 1 is where most of the PLCs reside and where most of the actual controlling of the OT takes place.  Level 0 is the OT equipment itself (Bodungen, 2017).

There are eight overall goals when developing security policies and rules for firewalls.  The first goal is to eliminate all direct connections from the internet to the process control network (PCN)/SCADA network, otherwise known as the Level 3 ICS-DMZ in the Purdue Model (CPNI, 2005).  The second goal is to restrict access from Level 5 to Level 3 and below; very few, if any, employees should have access to the lower levels of the ICS network.  As a best practice, this is an example of "least privilege" and "need to know" (CPNI, 2005).  Goal three is to allow but restrict access to Level 3 from the enterprise Levels 4 and 5; that access should be restricted to only the servers and devices needed for compliance reasons, such as data historians and maintenance databases (CPNI, 2005).  Goal four is to secure remote access to the control systems.  Occasionally, third parties such as vendors or contractors will need access to the control systems, whether for emergency maintenance or for upgrading systems, and there should be separate policies and firewall rules for vendors and contractors.  The fifth goal is to secure all wireless connections.  The sixth goal is to develop well-defined rules for what traffic and protocols are allowed through the firewall; the IT department will need to create Access Control Lists (ACLs) and ensure that the principles of "least privilege" and "need to know" are applied.  The seventh goal is to secure the connection between the firewall and management, so that system administrators can monitor all traffic over a secure connection using highly restricted management servers.  The eighth and final goal is to monitor all traffic and scan for unauthorized protocols or unusual activity, which can be achieved by using a firewall or an Intrusion Prevention/Detection System (IPS/IDS).
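
To make these goals concrete, the following minimal Python sketch models a default-deny rule set at the Level 3 ICS-DMZ boundary.  The zone names and the Modbus/TCP port choice are illustrative assumptions, not values taken from the CPNI guidance.

    # Minimal sketch of a default-deny rule set for an ICS-DMZ boundary firewall.
    ALLOWED_FLOWS = [
        # (source zone, destination zone, protocol, destination port)
        ("enterprise", "ics_dmz", "tcp", 443),   # historians reachable from Levels 4/5 over HTTPS
        ("ics_dmz", "control",   "tcp", 502),    # Modbus/TCP from the DMZ down toward Level 2
    ]

    def is_allowed(src_zone, dst_zone, proto, dport):
        # Anything not explicitly listed is dropped (default deny).
        return (src_zone, dst_zone, proto, dport) in ALLOWED_FLOWS

    print(is_allowed("enterprise", "ics_dmz", "tcp", 443))   # True
    print(is_allowed("enterprise", "control", "tcp", 502))   # False: no direct enterprise-to-control path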

IDSs and IPSs are useful because they give you greater flexibility in what to do with the traffic.  Firewalls generally only deny, drop, or allow traffic based on the rules that were written; if the telnet protocol is blocked, then no telnet traffic will be allowed to pass the firewall.  An IDS/IPS, on the other hand, can block the connection and alert the IT department if any unauthorized activity is attempted on the network.  There are three primary detection methodologies used by IDS/IPSs.  The first is signature-based detection.  Signatures can come in many forms, but some common signatures are unauthorized protocols or unauthorized account names.  For example, the name root should probably never be used, since it is a well-known name that many hackers will try when logging in.  Other signatures could be file names commonly associated with malware.  One area of information security that studies and creates signatures is threat intelligence: threat analysts study malware and cyber attacks and create indicators of compromise (IOCs) that can be programmed into firewalls and IDS/IPS devices to stop attacks before they happen.  This detection method is good for known threats (Scarfone, 2007).
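
A toy version of signature-based detection can be expressed in a few lines of Python; the indicator strings and log lines below are made-up placeholders, not real IOCs.

    # Toy signature-based check: match log lines against a small set of indicators.
    SIGNATURES = ["login name=root", "filename=invoice.exe"]  # placeholder indicators

    def match_signatures(log_line):
        return [sig for sig in SIGNATURES if sig in log_line]

    for line in ["10.0.0.5 login name=root failed", "10.0.0.7 login name=operator ok"]:
        hits = match_signatures(line)
        if hits:
            print(f"ALERT: {hits} matched in: {line}")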

The second detection method is anomaly-based detection.  Anomaly-based detection can detect unusual activity that deviates from an initial baseline profile.  If the regular working hours are from 8 a.m. to 5 p.m. and activity is detected at midnight, an alert is triggered, and the system either records the event in a log file or blocks the connection entirely (Scarfone, 2007).
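
The working-hours example can be sketched the same way; the baseline window below is an assumption for illustration, not something specified by Scarfone and Mell.

    # Toy anomaly-based check: flag logins outside assumed working hours of 8 a.m. to 5 p.m.
    def is_anomalous(login_hour):
        return not (8 <= login_hour < 17)

    for event in [{"user": "operator1", "hour": 9}, {"user": "operator1", "hour": 0}]:
        if is_anomalous(event["hour"]):
            print(f"ALERT: login by {event['user']} at hour {event['hour']} deviates from baseline")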

The third detection method is stateful protocol analysis.  Stateful protocol analysis is the latest detection method and is suited to deep packet inspection, since it can remember the "state" of a connection while inspecting the packets.  This is useful for connections that must be authorized: when a user makes a connection and is authenticated, the network device remembers that the connection was authorized and what was authorized.  While stateful analysis IDS/IPSs are the most capable, the downside is that they are extremely resource-intensive and can slow network traffic down when there is too much of it (Scarfone, 2007).

The National Institute of Standards and Technology (NIST) created two critical documents that can help with creating and applying security controls and policies.  NIST SP 800-82 r2, the Guide to Industrial Control Systems Security, contains the best practices for ICS security policies and controls.  These controls are based on the controls presented in another vital document, NIST SP 800-53 r4, Security and Privacy Controls for Federal Information Systems and Organizations.  SP 800-53 contains most of the security and privacy controls that you would need to build your security policies from.  IT administrators should attempt to implement the controls in SP 800-82 first; where those controls are not possible or feasible, SP 800-53 contains compensating controls that can still achieve the same or nearly the same results.

As with the Internet of Things (IoT), more devices than ever are being connected online, and this trend will likely keep accelerating for the foreseeable future.  In fact, a newer buzzword, the Industrial Internet of Things (IIoT), describes the same idea with control systems as the devices connected to the internet.  Another new trend in the ICS world is virtualization and cloud services.  For the same reasons that businesses have started moving to the cloud, industrial control systems are going to start moving there as well.  As this happens, some of the same vulnerabilities that exist in cloud services will start appearing in ICS networks, and with those new vulnerabilities, security professionals will have to find new ways to secure ICS networks.

References

CPNI.  (February 2005).  Firewall Deployment for SCADA and Process Control Networks.  Retrieved from https://www.ncsc.gov.uk/content/files/protected_files/guidance_files/2005022-gpg_scada_firewall.pdf

Bodungen, Clint. Singer, Bryan. Shbeeb, Aaron. Hilt, et al. (2017). Hacking Exposed: ICS and SCADA Security Secrets & Solutions.  New York, NY: McGraw-Hill Education.

Scarfone, Karen. Mell, Peter.  (February 2007).  Guide to Intrusion Detection and Prevention Systems (IDPS).  Retrieved from http://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-94.pdf

Stouffer, Keith. Pillitteri, Victoria. Et. Al. (May 2015).  Guide to Industrial Control Systems Security Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r2.pdf

Joint Task Force Transformation Initiative.  (April 2013). Security and Privacy Controls for Federal Information Systems and Organizations.  Retrieved from  http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-53r4.pdf

Cruz, Tiago.  Simoes, Paulo, Et. Al.  (July 2016).  Security implications of SCADA ICS virtualization: survey and future trends.  Retrieved from https://www.researchgate.net/publication/305725280_Security_implications_of_SCADA_ICS_virtualization_survey_and_future_trends

Computer Fraud and Abuse Act

Filed under: Uncategorized — Tags: , , — Ken @ 8:03 pm

The Computer Fraud and Abuse Act (CFAA), otherwise known as 18 U.S.C. §1030, was enacted in 1986.  This law superseded the Comprehensive Crime Control Act of 1984.  The CFAA was written to address ever-evolving computer crimes and to increase the scope of the regulations.  It added tougher criminal sanctions and limited federal jurisdiction to cases that involved the federal government.  Since the CFAA was enacted in 1986, it has been amended several times, both on its own and by the USA PATRIOT Act.  The CFAA has been controversial since it was passed, and several famous cases involving it have shown some of its weaknesses and strengths.  The current version of the Computer Fraud and Abuse Act (18 U.S.C. § 1030) is ineffective in dealing with security researchers and should be amended.

Originally, the CFAA was intended to protect the federal government's interests by criminalizing certain acts involving computers.  This included unauthorized access and trespassing, and it added language covering altering, damaging, or destroying information.  Trafficking in passwords was an additional section that was added.  In the 1980s and 90s, the United States was still in a cold war with the Soviet Union, so the targets of the CFAA were hackers and spies.

In 1994, the CFAA was further amended to add civil penalties for violations of the act.  The language was also expanded to include a new threat at that time that would come to be known as malware; the specific language was "knowingly causing the transmission of a program, information, code, or command which intentionally causes damage without authorization."  Since 9/11, the CFAA has been amended by the USA PATRIOT Act to increase its scope and penalties.  This law is important and very much needed to prosecute hackers and criminals and to protect companies' and the government's physical and intellectual property.  However, there are several problems with the language of the law that need to be reformed to prevent abuses.

The CFAA has several problems.  The biggest is that its language is very vague (Lofgren, 2013).  For example, what is "unauthorized access"?  Is it accessing a computer or resource on the internet in a way that evades the standard username and password process?  Alternatively, could it be a violation of a website's Terms of Service (TOS)?  In fact, it is both.  The first example is when a hacker uses tools such as password crackers to "crack" the password for a user account and then enters the system "without authorization."  That is an easy one.  But what if a business or home has an open Wi-Fi network without a password, and your cell phone sees it, logs on automatically, and uses the network without any authorization from the business or family?  Under the law, that is still a felony.  The second example is so common that practically every child commits a crime every time they use a service.  The TOS page that everyone clicks to agree to without reading could be the basis for a violation.  A perfect example is Facebook.  To date, a user has to be at least 13 years old to use Facebook.  How many children, or parents for that matter, just click agree and create a Facebook account without reading the TOS?  They are committing felonies.  The age of thirteen is significant, however: the Children's Online Privacy Protection Act (COPPA) protects children under thirteen years old from privacy violations (Graber 2014).  A well-known example of a CFAA case based on a TOS violation is the case against Lori Drew.  In 2008, she was charged with hacking for creating a false account on MySpace to essentially bully a teenage girl who had fought with her daughter.  The girl ended up committing suicide.  The public was obviously outraged by what happened, but there were no laws on the books for cyberbullying (Zetter, 2014), so prosecutors charged her with violating the CFAA.

Another famous case was against Aaron Swartz.  Swartz was a computer genius and child prodigy who was partly responsible for creating the RSS protocols and the Creative Commons licensing framework.  He was arrested by the police after breaking into the MIT network and a server room to download over four million scholarly articles from JSTOR, which he did using a guest account issued to him by MIT.  JSTOR is an online database of digital articles and papers that have been published in academic journals; it is used by universities all over the world, but users must have a subscription to access the database.  Even though Swartz reached a deal with JSTOR, which dropped its claims, federal prosecutors still charged him with violating the CFAA.  His potential penalties were a one-million-dollar fine and 34 years of imprisonment (Zetter, 2015), because he was prosecuted on thirteen counts of computer crimes.  This brings us to the next problem with the CFAA.

Currently, with the way the federal government charges defendants, there is a tendency to overcharge by turning every event or account into a separate count.  Each time someone accesses an account, he or she can be charged with another crime, even if all of the violations happened during the same event.  Another famous court case involved Fidel Salinas, who was charged with 44 felony counts under the CFAA, each count carrying a potential ten-year sentence; the prosecutors counted each time he accessed a victim's account.  After it was all said and done, he only served six months and was fined $10,000.

Despite the bad examples of court cases, there are valid reasons this law exists.  The information contained on the federal government's and military's computers is extremely valuable for both privacy and espionage reasons, and unauthorized access to these computers should be prosecuted.  The banking, financial, and private sectors also have good reason to be protected.  However, as the law stands and as it has been used, it needs to be amended to resolve some of its vagueness and to change the way it works in order to prevent it from being abused.

The Electronic Frontier Foundation (EFF) is a non-profit organization founded in 1990 to protect civil liberties in the digital world.  It has proposed several changes to the CFAA.  The first is to limit the criminal law to actual intrusions and harm: users should not be sent to prison for violating a website's TOS.  Laws should not be written in a way that makes everyone a violator and then lets companies or the federal government prosecute only the individuals they want.  Nobody should be able to "cherry pick" whom they want to charge.

The section on trafficking in passwords needs to be amended so that no one goes to prison for sharing passwords with family or friends.  So, for those with a Netflix or Hulu account, they will be safe if they share their passwords with their roommates.

The EFF also proposes eliminating two provisions of the CFAA, §1030 (a)(3) and §1030 (a)(4) (Cohn 2013), because of the way they are used to double-charge someone.  The first section criminalizes access without authorization, and the second criminalizes access to a computer without authorization "knowingly and with intent to defraud," so someone can be charged under both sections for the same act.

The EFF also proposes making more punishments misdemeanors rather than felonies.  Felonies carry with them several indirect punishments that make them inappropriate for every violation.  One example involves people who are not citizens: even a lawful permanent resident could be immediately deported upon receiving a felony conviction.  Misdemeanor punishments can still be substantial; a judge can sentence someone to a year in jail and a $100,000 fine (Cohn 2013), which would be more than adequate for most of the violations.

When the original law was enacted in 1986, there was a need for it, and it has been amended several times as technology has become more integrated with life in general.  However, the law has been applied well outside its original intent, and it has been abused by both the government and companies to extend punishments, or to throw as many violations at defendants as possible in hopes of making them plead guilty.  Therefore, the Computer Fraud and Abuse Act should be amended to correct these abuses.

References

Cohn, Cindy. Hofmann, Marcia.  (February 2013).  Rebooting Computer Crime Law  Part 2:  Protect Tinkerers, Security Researchers, Innovators, and Privacy Seekers.  Retrieved From.  https://www.eff.org/deeplinks/2013/02/rebooting-computer-crime-law-part-2-protect-tinkerers-security-researchers

Zetter, Kim.  (November 2014).  Hacker Lexicon:  What is the Computer Fraud and Abuse Act?  Retrieved from https://www.wired.com/2014/11/hacker-lexicon-computer-fraud-abuse-act/

Lofgren, Zoe. Wyden, Ron.  (June 2013).  Introducing Aaron’s Law, A Desperately Needed Reform of the Computer Fraud and Abuse Act.  Retrieved from https://www.wired.com/2013/06/aarons-law-is-finally-here/

Graber, Diana.  (October 2014).  3 Reasons Why Social Media Age Restrictions Matter.  Retrieved from http://www.huffingtonpost.com/diana-graber/3-reasons-why-social-media-age-restrictions-matter_b_5935924.html

Zetter, Kim.  (October 2015).  The Most Controversial Hacking Cases of the Past Decade.  Retrieved from https://www.wired.com/2015/10/cfaa-computer-fraud-abuse-act-most-controversial-computer-hacking-cases/

Auditing Industrial Control Systems

Filed under: Uncategorized — Tags: , — Ken @ 8:00 pm

Auditing Industrial Control Systems

Some of the first Industrial Control Systems (ICS) came about in the 1960s, when computers were first starting to become popular.  These early ICS ran on mainframe computers and were used to control machines and sensors for industries such as oil and gas and the electric grid.  As technology advanced and computers became smaller and more powerful, ICS evolved and became integrated into just about every aspect of life.

ICS is really a general term covering several different types of systems: Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS), and Programmable Logic Controllers (PLC).  These systems, along with sensors, give companies the power to control everything from power generation on the electric grid to drilling in the oil and gas industry and the manufacturing of raw materials such as metals and plastics.

Lately, the security weaknesses of the United States’ power grid have come into the spotlight due to the prevalence of cyber attacks.  An excellent example of what could happen during a cyberattack on the U.S. is the Aurora Generator Test.  In 2007, at the Idaho National Laboratory, a test was conducted to simulate a cyber attack on an electric grid.  A generator was connected to a power substation, and to simulate the attack, specially designed code was sent to the generator to open and close circuit breakers out of sync.  Opening and closing the breakers out of phase created enough stress to break parts off of the generator.  Within about three minutes of the test starting, the generator had been destroyed and was left smoking (Swearingen, Brunasso, 2013).  A cyber attack destroying a single generator may not seem like a big deal, but it is significant: the experiment involved only one generator.  Imagine a cyber attack on tens or hundreds of generators at the same time.  Within three minutes, entire cities could be blacked out, and the resulting surge in demand on other generating stations could knock them out as well.

To prevent this from happening, it is imperative to continually test ICS for vulnerabilities and correct them as soon as possible.  Auditing can help by keeping companies honest and preventing them from becoming complacent.  The Marine Corps has a famous slogan: “Complacency Kills.”  It means that when a person gets lazy doing the same task over and over, they begin to take shortcuts and skip steps, and when that happens, accidents occur and people sometimes get killed.  The same is true for industrial control systems.  Auditing is becoming essential for the U.S. government and for society as the country relies more and more on the benefits of ICS.

Two organizations specialize in protecting ICS.  The Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) operates within the National Cybersecurity and Communications Integration Center (NCCIC), a division of the Department of Homeland Security (ICS-CERT, n.d.).  ICS-CERT, along with the NCCIC, has created a document called the “Seven Strategies to Defend ICSs.”

The first strategy calls for implementing application whitelisting (NCCIC, 2015).  By whitelisting which applications are allowed on the network, companies can detect malware that hackers upload.  This is especially helpful when the network is static and does not change much.
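
The sketch below is a minimal, illustrative way to think about a whitelist check during an audit: hash every executable in a directory and flag anything whose hash is not on an approved list. The directory path and the manifest file name (approved_hashes.txt) are hypothetical examples, not part of any NCCIC tooling.

```python
# Minimal application-whitelisting audit sketch (illustrative only).
# Assumes a hypothetical manifest "approved_hashes.txt" with one SHA-256
# hash per line for every approved executable.
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def audit_directory(directory: str, manifest: str) -> list[Path]:
    """Return executables whose hashes are not in the approved manifest."""
    approved = {line.strip() for line in Path(manifest).read_text().splitlines() if line.strip()}
    return [exe for exe in Path(directory).glob("*.exe") if sha256_of(exe) not in approved]

if __name__ == "__main__":
    for path in audit_directory(r"C:\ICS\bin", "approved_hashes.txt"):
        print(f"NOT WHITELISTED: {path}")
```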

Step two is to ensure proper configuration and patch management.  By updating and patching systems in a timely fashion, companies can avoid attacks that would have been easily prevented (NCCIC, 2015).  In fact, the recent attack on Equifax exploited a vulnerability for which a patch had been available for months before their systems were attacked (Newman, 2017).
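
As a rough illustration of how an auditor might verify patch levels, the sketch below compares a hypothetical inventory of installed software versions against a minimum-approved baseline; in a real audit, the installed versions would come from the organization's asset inventory or the vendor, and the product names here are made up.

```python
# Minimal patch-level check sketch (illustrative only).
# Product names, versions, and the baseline are hypothetical examples.
installed = {
    "historian-server": "4.2.1",
    "hmi-runtime": "7.0.3",
    "opc-gateway": "2.9.0",
}

minimum_patched = {
    "historian-server": "4.2.5",   # hypothetical version that fixes a known flaw
    "hmi-runtime": "7.0.3",
    "opc-gateway": "3.0.0",
}

def version_tuple(version: str) -> tuple[int, ...]:
    """Convert '4.2.1' into (4, 2, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))

for name, have in installed.items():
    need = minimum_patched[name]
    if version_tuple(have) < version_tuple(need):
        print(f"OUT OF DATE: {name} {have} (minimum patched version is {need})")
```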

The third strategy is to reduce the attack surface.  By disabling or uninstalling any services or programs that are not used, companies limit what is available for a hacker to exploit.  Companies should also isolate ICS networks from all untrusted networks, including the internet (NCCIC, 2015).
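
A simple way to check the attack surface from the network side is to probe a host for listening services and flag anything unexpected. The sketch below is illustrative only; the host (a documentation address) and the expected-port list are assumptions. In practice, active scans of live control systems should be carefully coordinated, since even a simple connection attempt can upset fragile devices.

```python
# Minimal attack-surface check sketch (illustrative only).
# Connects to a short list of TCP ports and flags anything listening
# that is not on the expected list. Host and ports are hypothetical.
import socket

EXPECTED_OPEN = {502}            # e.g. only Modbus/TCP should be listening
PORTS_TO_CHECK = [21, 22, 23, 80, 443, 502, 3389]

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> set[int]:
    found = set()
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:   # 0 means the connection succeeded
                found.add(port)
    return found

if __name__ == "__main__":
    unexpected = open_ports("192.0.2.10", PORTS_TO_CHECK) - EXPECTED_OPEN
    for port in sorted(unexpected):
        print(f"UNEXPECTED OPEN PORT: {port}")
```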

The fourth strategy is to build a defendable network.  By segmenting networks, companies can limit the damage if part of the network is compromised.  If every host can reach every other host, a single compromised computer can affect the entire network; if the network is separated into smaller segments, the damage is limited to the segment that was breached (NCCIC, 2015).
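
One way to make segmentation auditable is to write down the allowed traffic flows between segments as data and check firewall rules against that policy. The segment names and rules in the sketch below are hypothetical examples, not a real network.

```python
# Minimal segmentation policy check sketch (illustrative only).
# Segment names and the allowed-flow policy are hypothetical examples.
ALLOWED_FLOWS = {
    ("hmi-segment", "plc-segment"),        # operators may talk to controllers
    ("historian-segment", "plc-segment"),  # historian may poll controllers
}

firewall_rules = [
    ("hmi-segment", "plc-segment"),
    ("corporate-lan", "plc-segment"),      # should be flagged
    ("historian-segment", "plc-segment"),
]

for source, destination in firewall_rules:
    if (source, destination) not in ALLOWED_FLOWS:
        print(f"POLICY VIOLATION: {source} -> {destination} is not an approved flow")
```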

The fifth strategy is to manage authentication.  To accomplish this, companies should follow best practices such as strong password policies, multi-factor authentication, and “least privilege” (NCCIC, 2015).
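
A small example of enforcing part of a password policy is sketched below: it checks a deny-list of common default passwords, a minimum length, and character classes. The thresholds and the deny-list are assumptions for illustration, not NCCIC requirements.

```python
# Minimal password-policy check sketch (illustrative only).
import re

DEFAULT_PASSWORDS = {"admin", "password", "1234", "plc"}   # hypothetical deny-list

def meets_policy(password: str, min_length: int = 14) -> bool:
    if password.lower() in DEFAULT_PASSWORDS:
        return False
    if len(password) < min_length:
        return False
    # Require upper case, lower case, a digit, and a symbol.
    classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
    return all(re.search(pattern, password) for pattern in classes)

print(meets_policy("admin"))                   # False: known default password
print(meets_policy("Winter2017!"))             # False: too short
print(meets_policy("Turbine-Hall-7-Valves!"))  # True
```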

The sixth strategy is to implement secure remote access.  The best approach is not to allow remote access at all.  If that is not possible, the next best solution is to allow only monitoring, not execution.  If users must have execute permissions, access should be restricted to a single access point and all other pathways should be blocked.  Again, companies should use multi-factor authentication (NCCIC, 2015).
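
As an illustration of auditing this strategy, the sketch below reviews a hypothetical list of remote session records and flags any session that did not come through the approved single access point, or that lacked multi-factor authentication. The jump-host address and session fields are assumptions.

```python
# Minimal remote-access audit sketch (illustrative only).
# Session records and the jump-host address are hypothetical examples.
APPROVED_JUMP_HOST = "192.0.2.50"

remote_sessions = [
    {"user": "vendor_tech", "source": "192.0.2.50", "mfa": True},
    {"user": "contractor",  "source": "203.0.113.8", "mfa": True},   # wrong access path
    {"user": "operator",    "source": "192.0.2.50", "mfa": False},   # no second factor
]

for session in remote_sessions:
    if session["source"] != APPROVED_JUMP_HOST:
        print(f"UNAPPROVED ACCESS PATH: {session['user']} from {session['source']}")
    elif not session["mfa"]:
        print(f"NO MFA: {session['user']}")
```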

The seventh and final strategy is to monitor and respond.  By installing Intrusion Detection Systems (IDS) and Intrusion Prevention Systems (IPS) and continuously monitoring log files, companies can catch problems before they become security incidents.  Companies should also develop response plans and regularly test them (NCCIC, 2015).
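
One very small piece of the monitoring problem, counting failed logins per source address in a log file, is sketched below. The log format, file name, and alert threshold are hypothetical assumptions; a real deployment would feed an IDS or a log-management platform rather than a script.

```python
# Minimal log-monitoring sketch (illustrative only).
# Counts failed-login lines per source address and flags noisy sources.
from collections import Counter
import re

FAILED_LOGIN = re.compile(r"FAILED LOGIN .* from (?P<src>\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5   # hypothetical alert threshold

def suspicious_sources(log_path: str) -> dict[str, int]:
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                counts[match.group("src")] += 1
    return {src: n for src, n in counts.items() if n >= THRESHOLD}

if __name__ == "__main__":
    for src, count in suspicious_sources("auth.log").items():
        print(f"ALERT: {count} failed logins from {src}")
```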

The National Institute of Standards and Technology (NIST) is another important organization that has produced valuable guidance on cybersecurity, specifically NIST SP 800-82 Revision 2, Guide to Industrial Control Systems (ICS) Security, and the Framework for Improving Critical Infrastructure Cybersecurity.  The NIST SP 800-82 R2 guide discusses ICS risk management, ICS security architectures, and how to apply security controls to ICS (NIST, 2015).

NIST’s Risk Management Framework (RMF) is a six-step process: Categorize Information Systems, Select Security Controls, Implement Security Controls, Assess Security Controls, Authorize Information Systems, and Monitor Security Controls.  By working through this six-step cycle, companies can identify vulnerabilities and select controls to mitigate them based on the company’s priorities (NIST, 2015).

NIST’s Framework for Improving Critical Infrastructure Cybersecurity is another essential document for improving ICS security.  The basic framework consists of five functions: Identify, Protect, Detect, Respond, and Recover (NIST, 2014).  Each function is broken down into categories and sub-categories that get more specific and technical.  The most useful part of the framework is the Informative References, where specific sections of standards, guidelines, and practices are tied to each sub-category.  These references include COBIT, ISA, ISO/IEC 27001, and NIST SP 800-53 R4, along with the specific sections that apply (NIST, 2014).  This is an indispensable guide for ICS auditors.  The framework also has a tiered model that helps companies understand how developed and mature their risk management practices are.  The four tiers are Partial (Tier 1), Risk-Informed (Tier 2), Repeatable (Tier 3), and Adaptive (Tier 4).  Tier 1 describes companies that have only a limited understanding of risk management and are mostly reactive to problems.  Tier 2 is slightly more mature, with formal processes in place.  Tier 3 companies actively use and monitor their risk management processes and make improvements when needed.  Finally, Tier 4 companies have fully mature risk management processes, learn from their own and others’ mistakes, and continually adapt their processes as the security situation changes (NIST, 2014).

There are some special considerations when working with and auditing Industrial Control Systems.  It is essential to understand that many of these systems were designed and installed before the internet became common.  Often, as technology advanced, components were “bolted on” to connect the control systems to the internet, and systems that were never designed to be internet-connected became highly vulnerable.  Some problems that became apparent were that most communications were sent in plain text and that many systems shipped with hard-coded default passwords that could not be changed.  While this looks like a glaring error today, five to ten years ago it was not a problem because these systems were not connected to the internet.

Even today there are issues with network-connected medical devices that have hard-coded passwords.  An ICS-CERT alert, ICS-ALERT-13-164-01, reported a vulnerability in over 300 medical devices spanning roughly 40 vendors that used hard-coded passwords (ICS-CERT, 2013).  Another issue with industrial control systems is that the hardware specifications are often just enough to run the software, leaving no headroom to add encryption.  Encryption and control systems do not mix well (Sample, 2006): encryption often requires more bandwidth and memory than the ICS hardware can provide.  A final issue for auditors is that control systems often rely on vulnerable software and protocols (Sample, 2006).  Windows XP stopped being supported several years ago, yet many ICS still run Windows XP and are incapable of upgrading to more secure versions of Windows.
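
One practical audit check that follows from the hard-coded and default password problem is to compare the credentials recorded in an asset inventory against known vendor defaults. The device names, default list, and credentials in the sketch below are hypothetical examples.

```python
# Minimal default-credential audit sketch (illustrative only).
# Devices, vendor defaults, and credentials are hypothetical examples.
VENDOR_DEFAULTS = {
    ("admin", "admin"),
    ("root", "password"),
    ("service", "service"),
}

asset_inventory = {
    "infusion-pump-01": ("admin", "admin"),       # should be flagged
    "plc-line-3": ("operator", "V@lve-Hall-7!"),
    "rtu-substation-9": ("root", "password"),     # should be flagged
}

for device, credentials in asset_inventory.items():
    if credentials in VENDOR_DEFAULTS:
        print(f"DEFAULT CREDENTIALS IN USE: {device}")
```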

Many of these vulnerabilities can be mitigated with a proper security architecture that blocks insecure control systems from untrusted networks and the internet.  Port management can be implemented, and other controls can be added that make it more difficult for hackers to gain access.  Companies can look into upgrading their control systems and, where they cannot, into upgrading network components and software with better capabilities.

With an understanding of the standards and vulnerabilities of ICS, it is worth reviewing an actual audit.  In February 2017, the NASA Office of Inspector General (IG) audited NASA’s critical and supporting infrastructure.  The IG office has conducted this audit, along with 21 other audit reports, because NASA has steadily moved from older, isolated, manually controlled operational technology to more modern systems controlled over networks (Leatherwood, 2017).  The IG found that NASA still has several deficiencies and significant issues concerning its critical infrastructure.  There are two main concerns in the audit report.  The first is that NASA lacks comprehensive security planning for managing risk to its Operational Technology (OT) systems; examples of these OT systems include HVAC systems, tracking and telemetry systems, and command and control systems.  The second is that NASA’s critical infrastructure assessment and protection could benefit from improved OT security (Leatherwood, 2017).

Regarding comprehensive security planning for managing risk to its OT systems, there are several issues where NASA could improve.  First, NASA needs to do a better job of defining what OT systems it has.  The auditors found inconsistency in how OT systems were defined across different NASA Centers; often a system would be listed as critical infrastructure at one location and not listed at all at another.  There needs to be consistency in how the OT systems are defined (Leatherwood, 2017).

Another finding is that NASA did not follow NIST guidance on how to categorize its OT systems.  NASA failed to distinguish between OT systems and IT systems, and when it fails to make that distinction, it ends up grouping systems with very different security risks into a single group.  This makes it more challenging to perform risk assessments and implement the right security controls (Leatherwood, 2017).

Awareness and training is another area where NASA could improve.  The auditors visited five NASA centers, including NASA Headquarters, and interviewed more than two dozen employees.  They discovered that NASA does not require role-based, OT-specific training, although most employees do receive IT training.  Without OT training alongside the regular IT security training, employees are less able to identify vulnerable systems.  An example would be a building’s HVAC system: if an employee did not recognize the HVAC system as a high-risk system, they might not take the proper steps to protect it, such as locking or securing a door that leads to the HVAC controls.  If a hacker gained access to the HVAC system, they could shut it off, potentially putting the IT systems at risk (Leatherwood, 2017).

Lastly, NASA had several easily exploitable risks that could be prevented with administrative controls.  For the OT systems, there was a lack of internal monitoring, auditing, and log file management; most systems were checked manually by NASA personnel, and there were no controls in place to monitor physical or logical isolation from the main networks.  NASA also used group accounts, which creates vulnerability in two ways.  First, with group accounts there is no way to know who accessed a system, so if anything goes wrong, there is no way to attribute the activity to a single employee.  Second, group accounts enable insider attacks: if an employee is fired and the credentials are not changed, that employee can still gain access to the OT systems.  Most of these issues can be identified and corrected using known best practices and the proper security controls; there just needs to be a centralized and controlled effort so that all of the NASA offices are using the same language and playbook (Leatherwood, 2017).
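
As a small illustration of the accountability problem with group accounts, the sketch below walks a hypothetical list of account records and flags shared accounts and accounts whose owners have left; a real audit would pull these records from a directory service rather than hard-coding them.

```python
# Minimal account audit sketch (illustrative only).
# The account records below are hypothetical examples.
accounts = [
    {"name": "ops_team", "shared": True,  "owner": None,     "owner_active": None},
    {"name": "jsmith",   "shared": False, "owner": "jsmith", "owner_active": True},
    {"name": "bjones",   "shared": False, "owner": "bjones", "owner_active": False},
]

for account in accounts:
    if account["shared"]:
        print(f"SHARED ACCOUNT (no individual accountability): {account['name']}")
    elif not account["owner_active"]:
        print(f"STALE ACCOUNT (owner no longer employed): {account['name']}")
```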

Industrial Control Systems have come a long way since the 1960s, and they will continue to evolve and become more complicated as time goes on.  Luckily, several organizations provide excellent documents on how to protect ICS from hackers, and there are plenty of examples of what can go wrong.  Companies just need to use the information that is available and implement it on their networks.  If companies fail to take ICS security seriously, there will eventually be a significant attack on the nation’s critical infrastructure that could put thousands of lives at risk.

References

Swearingen, Michael, Brunasso, Steven, et al.  (September 2013).  What You Need to Know (and Don’t) About the Aurora Vulnerability.  Retrieved from http://www.powermag.com/what-you-need-to-know-and-dont-about-the-aurora-vulnerability/?printmode=1

ICS-CERT. (n.d.).  About the Industrial Control Systems Cyber Emergency Response Team.  Retrieved from https://ics-cert.us-cert.gov/About-Industrial-Control-Systems-Cyber-Emergency-Response-Team

NCCIC.  (December 2015).  Seven Strategies to Defend ICSs.  Retrieved from https://ics-cert.us-cert.gov/sites/default/files/documents/Seven%20Steps%20to%20Effectively%20Defend%20Industrial%20Control%20Systems_S508C.pdf

Newman, Lily Hay.  (September 2017).  Equifax Officially Has No Excuse.  Retrieved from https://www.wired.com/story/equifax-breach-no-excuse/

NIST.  (February 2014).  Framework for Improving Critical Infrastructure Cybersecurity.  Retrieved from https://www.nist.gov/sites/default/files/documents/cyberframework/cybersecurity-framework-021214.pdf

NIST.  (May 2015).  NIST Special Publication 800-82 Revision 2:  Guide to Industrial Control Systems (ICS) Security.  Retrieved from http://nvlpubs.nist.gov/nistpubs/SpecialPublications/NIST.SP.800-82r2.pdf

ICS-CERT. (June 2013).  Alert (ICS-ALERT-13-164-01) Medical Devices Hard-Coded Passwords.  Retrieved from https://ics-cert.us-cert.gov/alerts/ICS-ALERT-13-164-01

Sample, James.  (2006).  Challenges of Securing and Auditing Control Systems.  Retrieved from http://www.isacala.org/doc/ISACALA_SCADA_Presentation_FinalJamey.pdf

Leatherwood, James.  (February 2017).  Industrial Control System Security Within NASA’s Critical And Supporting Infrastructure.  Retrieved from https://oig.nasa.gov/audits/reports/FY17/IG-17-011.pdf
