In our last "bring your own device" post we explored some of the key security, privacy and incident response issues related to BYOD. These issues are often important drivers in a company's decision to pursue a BYOD strategy and set the scope of personal device use within their organization. If the risks and costs associated with BYOD outstrip the benefits, a BYOD strategy may be abandoned altogether. One of the primary tools (if not the most important tool) for addressing such risks is a set of BYOD-related policies. Sometimes these policies are embedded within an organization's existing security and privacy policy framework. More frequently, however, companies are creating separate personal device use policies that stand alone or work with/cross-reference existing company security, privacy and incident response policies. This post lays out the key considerations company lawyers and compliance personnel should take into account when creating personal device use policies and outlines some of the important provisions that are often found in such policies.
We have entered an era where our commercial transactions are increasingly being conducted online without any face-to-face interaction, and without the traditional safeguards used to confirm that a party is who they purport to be. The attenuated nature of many online relationships has created an opportunity for criminal elements to steal or spoof online identities and use them for monetary gain. As such, the ability of one party to authenticate the identity of the other party in an online transaction is of key importance. To counteract this threat, the business community has begun to develop new authentication procedures to enhance the reliability of online identities (so that transacting parties have a higher degree of confidence that the party on the other end of an electronic transaction is who they say they are). At the same time, the law is beginning to recognize a duty to authenticate. This blogpost looks at two online banking breach cases to examine what courts are saying about authentication and commercially reasonable security.
Employees are increasingly using (and demanding to use) their personal devices to store and process their employer's data, and to connect to their employer's networks. This "Bring Your Own Device" trend is in full swing, whether companies like it or not. Some organizations believe that BYOD will allow them to avoid significant hardware, software and IT support costs. Even if cost savings is not the goal, most companies believe that the processing of company data on employee personal devices is inevitable and unavoidable. Unfortunately, BYOD raises significant data security and privacy concerns, which can lead to potential legal and liability risk. This blogpost identifies and explores some of the key privacy and security legal concerns associated with BYOD, including "reasonable" BYOD security, BYOD privacy implications, and security and privacy issues related to BYOD incident response and investigations.
As organizations of all stripes increasingly rely on cloud computing services to conduct their business, the need to balance the benefits and risks of cloud computing is more important than ever. This is especially true when it comes to data security and privacy risks. However, most cloud customers find it very difficult to secure favorable contract terms when it comes to data security and privacy. While customers may enjoy some short-term cost benefits by going into the cloud, they may be retaining more risk than they want (especially where cloud providers refuse to accept that risk contractually). In short, the players in this industry are at an impasse. Cyber insurance may be a solution to help solve the problem.
In 2011, InfoLawGroup began its "Legal Implications" series for social media by posting Part One (The Basics) and Part Two (Privacy). In this post (Part Three), we explore how security concerns and legal risk arise and interact in the social media environment. There are three main security issues that pose potential legal risk. First, to the extent that employees are accessing and using social media sites from company computers (or, increasingly, from personal computers connected to company networks or storing sensitive company data), malware, phishing and social engineering attacks could result in security breaches and legal liability. Second, spoofing and impersonation attacks on social networks could pose legal risks. In this case, the risk includes fake fan pages or fraudulent social media personas that appear to be legitimately operated. Third, information leakage is a risk in the social media context that could result in an adverse business and legal impact when confidential information is compromised.
Publicly traded businesses now have yet another set of guidelines to follow regarding security risks and incidents. On October 13, 2011, the Securities and Exchange Commission (SEC) Division of Corporation Finance released a guidance document that assists registrants in assessing what disclosures should be made in the face of cyber security risks and incidents. The guidance provides an overview of disclosure obligations under current securities laws - some of which, according to the guidance, may require disclosure of cyber security risks and incidents in financial statements.
California's infamous SB 1386 (California Civil Code sections 1798.29 and 1798.82), enacted in 2002, was the very first security breach notification law in the nation, and nearly every state followed suit. Many states added their own twists and variations on the theme - new triggers for notification requirements, regulator notice requirements, and content requirements for the notices themselves. Over the years, the California Assembly and Senate have passed numerous bills aimed at amending California's breach notification law to add a regulator notice provision and to require the inclusion of certain content. However, Governor Schwarzenegger vetoed such bills at least three times. Earlier this year, State Sen. Joe Simitian (D-Palo Alto) introduced Senate Bill 24, again attempting to enact such changes. Yesterday, August 31, 2011, Governor Brown signed SB 24 into law.
On July 20, 2011, the U.S. House of Representatives Energy and Commerce Committee's Trade Subcommittee approved the Secure and Fortify Electronic Data Act (the "SAFE Data Act"). The Act would require any business that maintains personal information to implement an information security program and notify affected individuals in the event of an information security breach. The SAFE Data Act would preempt the over 45 existing state information security and breach notification laws and task the Federal Trade Commission with developing information security rules implementing the Act.
As we move into 2011 it should be obvious that cloud computing is not a fad, but rather a computing model that is becoming ubiquitous. Cloud computing offers a slew of advantages, including efficiency, instant scalability and cost effectiveness. However, these advantages must be balanced against the control organizations may lose over their information technology operations when they rely on a cloud provider for key processes. The issues that arise out of this loss of control are apparent when considering data breach response and liability in the cloud. When a cloud customer puts its sensitive data into the cloud, it is completely reliant on the security and incident response processes of the cloud service provider in order to respond to a data breach. This situation poses many fundamental problems.
The Maine Supreme Court has rendered its opinion on the "damages" issue in the Hannaford Bros. consumer security breach lawsuit. Again, the plaintiffs have been unable to establish that they suffered any harm as a result of the Hannaford security breach. Specifically, the Court ruled that "time and effort" alone spent to avoid or remediate reasonably foreseeable harm do not constitute "a cognizable injury for which damages may be recovered." In this blogpost we take a closer look at the Court's rationale.
This blogpost is the third (and final) in our series analyzing the terms of Google's and Computer Sciences Corporation's ("CSC") cloud contracts with the City of Los Angeles. In Part One, we looked at the information security, privacy and confidentiality obligations Google and CSC agreed to. In Part Two, the focus was on terms related to compliance with privacy and security laws, audit and enforcement of security obligations, incident response, geographic processing limitations, and termination rights under the contracts. In Part Three, we analyze what might be the most important data security/privacy-related terms of a cloud contract (or any contract, for that matter): the risk of loss terms. This is a very long post looking at very complex and interrelated contract terms. If you have any questions, feel free to email me at dnavetta@infolawgroup.com
As many of our readers know, the International Association of Privacy Professionals (IAPP) will celebrate 10 years this Tuesday, March 16. In connection with that anniversary, the IAPP is releasing a whitepaper, "A Call For Agility: The Next-Generation Privacy Professional," tomorrow, March 15. I am honored that the IAPP has given me the opportunity to read and blog about the whitepaper in advance of its official release.