Security leaders’ top 10 takeaways for 2024


This year has been challenging for CISOs, with a growing burden of responsibility, the push to make cybersecurity a business enabler, the threat of legal liability for security incidents, and an expanding attack landscape. As the year comes to a close, CISOs reflect on some of the takeaways that have shaped the security landscape in 2024.

AI has shaken up cybersecurity, driving the development of new tools while also putting this powerful technology in the hands of hackers and cybercriminals, says Jake Williams, faculty at IANS Research and VP of R&D at Hunter Strategy. However, for organizations that rushed to adopt AI, this largely untested technology brought its own risks and created new vulnerabilities instead of solutions.

Williams has worked with several organizations that jumped into using AI coding assistants and found they were shipping code faster in their pilot groups. “In most cases, they deployed these tools more widely, and usually without additional developer training, and are finding higher defect rates in code since moving to AI coding assistants.” Most teams will take longer to resolve issues in AI-generated code. However, some organizations are finding that using AI coding assistants only for specific tasks, such as remediating vulnerabilities discovered with SAST, doesn’t increase the defect rate, Williams tells CSO.

Instead of asking whether AI coding assistants are bad, the question should be about the appropriate use cases. Code generation is a narrow, highly structured, easily measured use case, one where AI should excel. Given the issues even in this application, Williams suggests there are likely to be other problems with AI implementations that aren’t so obvious. “That we aren’t seeing overwhelming success here indicates there are likely hidden failures elsewhere in our AI adoptions that are simply harder to measure,” he says.

SEC rule changes: Best to err on the side of transparency

The US Securities and Exchange Commission’s (SEC) 2023 rules around risk management, strategy, governance, and incident disclosure added significant new reporting requirements for security leaders of public companies. The impact has been felt this year, as corporate disclosure burdens have increased significantly, according to Kayne McGladrey, field CISO at Hyperproof, who tracks the impact of regulatory changes.

One of the most significant new rules, which has received the lion’s share of press attention, is the ‘materiality’ component: the need to report “material” cybersecurity incidents to the SEC within four business days. At issue is whether the incident led to significant risk to the organization and its shareholders. If so, it’s defined as material and must be reported within four business days of this determination being made (not the incident’s initial discovery).

“Materiality extends beyond quantitative losses, such as direct financial impacts, to include qualitative aspects, like reputational damage and operational disruptions,” he says. McGladrey says the SEC’s materiality guidance underscores the importance of investor protection in relation to cybersecurity events and, if in doubt, the safest path is reporting. “If a disclosure is uncertain, erring on the side of transparency safeguards shareholders,” he tells CSO.
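The distinction between discovering an incident and determining it is material matters because it starts the reporting clock. A minimal sketch of computing that deadline, assuming a Monday-to-Friday business-day calendar and ignoring federal holidays (which the SEC calendar would also exclude):

```python
from datetime import date, timedelta

def disclosure_deadline(determination: date, business_days: int = 4) -> date:
    """Return the filing deadline: the given number of business days
    (Mon-Fri; holidays ignored for simplicity) after the date the
    incident was determined to be material."""
    d = determination
    remaining = business_days
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() < 5:  # Monday=0 .. Friday=4
            remaining -= 1
    return d

# Materiality determined on Thursday, Dec 5, 2024:
# the weekend is skipped, so the deadline lands the following Wednesday.
print(disclosure_deadline(date(2024, 12, 5)))  # 2024-12-11
```

A production implementation would pull in a market-holiday calendar rather than counting weekdays by hand.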

Smaller businesses are upping their security game

Leaders at smaller organizations are no longer paying lip service to cybersecurity and compliance; they’re making smaller investments in security and compliance strategy earlier to ensure their companies are resilient as they grow, according to Carlota Sage, founder and community CISO at Pocket CISO.

Through her virtual and fractional CISO practice, Sage has observed startups engaging vCISO services earlier, at the pre-seed and Series A stages and, in some cases, before they’ve finalized their minimum viable product. “Small technology consulting and boutique software development groups are looking for ISO 27001 certifications to ensure they can continue serving their larger customers,” she tells CSO.

In addition, leaders of mid-sized (300-500 employee) companies are looking for confirmation outside of an audit that their security and compliance programs follow best practices and are in good shape.

Organizations are focusing on transparency and open communication with customers

This year has seen the emergence of trust programs at major cloud service providers and Fortune 100 companies, according to George Gerchow, faculty at IANS Research and interim CISO/head of trust at MongoDB.

Major outages from companies like Snowflake and CrowdStrike, and multiple incidents involving Okta, have eroded trust in cloud service providers, Gerchow says. “Traditional security questionnaires and shared responsibility models aren’t cutting it anymore, and we’ve known that for a while,” he says. The lack of transparency surrounding major outages and incidents has created a lot of anxiety and, as a result, cloud adoption has slowed down. “Yet, the reality is that the tools people need are increasingly cloud-based,” he tells CSO.

In response, some organizations are building an Office of Trust, dedicated to transparency and open communication with customers. “These efforts are about getting ahead of the trust crisis, with VPs of security actively discussing emerging threats and how to build confidence. Everyone is seeking that transparency,” he says. Gerchow believes these offices will function as a direct line for companies to better protect themselves and their customers in the event of an incident. “As investment in AI continues to grow, trust and collaboration between teams will be more crucial than ever. The only way forward is to establish a foundation of trust,” he says.

Third-party security scrutiny improved, but needs more work

“Finally, and thankfully, progress has been made in recognizing that our existing process of requiring vendors to complete pages and pages of questionnaires to get ‘verified for business’ by customers is broken,” says Olivia Rose, CISO and founder at Rose CISO Group.

On the vendor side, these questionnaires are time-consuming, placing a heavy burden on team resources, according to Rose. “On the customer side, we expect CISOs, one of the most paranoid groups on the planet, to hand over access to their sensitive data and environments based on a few hundred answers provided by the vendor, along with a SOC 2 report,” she says. Despite these processes, the number of third- and fourth-party breaches has not declined, further supporting the notion that the whole process is broken, Rose says.

AI has improved how teams respond to these questionnaires, allowing them to do so more quickly, accurately, and painlessly. Even so, there’s still scope for improvement and the potential to save a considerable amount of the time and resources spent on this function. “I’m crossing my fingers and remain hopeful that in 2025, a startup will emerge with a more powerful and concrete way for customers to evaluate and verify that their connecting vendors have the expected level of security,” she says.

More incident response staff to handle the increase in phishing attacks

Phishing methods have continued to improve throughout 2024, creating a growing burden on detection teams. “I’ve seen a trend in phishing attacks where cybercriminals no longer send a single phishing email to thousands of our users,” says Tammy Loper, VP of information technology and security at the University of Tampa. Instead, cybercriminals are customizing each of the thousands of phishing emails sent at once, making it difficult or next to impossible for incident responders to shut down the attack quickly, says Loper.

If a phishing email evades email security detection and a user interacts with it, this subtle variation means incident handlers can no longer quickly purge the same message from every inbox that received it. “They now have to look for similarly constructed phishing emails with tiny differences that render each one a different and unique threat to our end users and purge each one separately,” she says. “Cybercriminals always improve at evading detection and creating new challenges for information security teams.”

This has led to the need to increase incident response staff to handle an exponentially larger number of unique security alerts and threats.
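The per-message uniqueness Loper describes is why exact-match purges fail and responders turn to similarity matching instead. A minimal sketch using Python’s difflib; the 0.9 threshold and the sample messages are illustrative assumptions, and production pipelines typically use fuzzy hashes (e.g., ssdeep) or embeddings rather than pairwise string comparison:

```python
import difflib

def cluster_variants(emails, threshold=0.9):
    """Group near-duplicate phishing messages so responders can purge a
    whole campaign at once instead of hunting each unique variant.
    Compares each message to the first member of every cluster using
    SequenceMatcher's similarity ratio."""
    clusters = []
    for body in emails:
        for cluster in clusters:
            if difflib.SequenceMatcher(None, cluster[0], body).ratio() >= threshold:
                cluster.append(body)
                break
        else:
            clusters.append([body])  # no close match: start a new cluster
    return clusters

# Two per-recipient variants of one lure, plus one unrelated message.
campaign = [
    "Dear Alice, your mailbox quota is full. Verify at http://evil.example/a1",
    "Dear Bob, your mailbox quota is full. Verify at http://evil.example/b7",
    "Invoice #2231 attached, please remit payment today.",
]
print([len(c) for c in cluster_variants(campaign)])  # [2, 1]
```

Pairwise comparison is O(n·k) in messages and clusters, so at mailbox-provider scale the same idea is implemented with locality-sensitive hashing instead.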

AI revealed unforeseen security threats

This year showed that potential security issues related to AI can be hard to predict, and it’s always easier to connect the dots after the fact. Vandy Hamidi, CISO at BPM, says AI has already had a significant impact in many forms, but IT and infosec teams need to stay on top of security threats and manage them as soon as they emerge. “There are predictions galore for the future of humanity post-AI, but the real outcomes won’t be evident until they’re at our front door,” he tells CSO.

Security professionals should guide and educate colleagues, while also educating themselves about this new class of risks as soon as possible. It will also demand agility to optimize the impact of the technology while being ready to adapt as security risk changes.

CISOs are aware deepfakes are a new class of risk

Easy access to deepfakes, even authorized deepfakes that companies may use to rapidly produce video content or create an interactive bot, is a new class of threat, says Hamidi. “What happens if a realistic bot can be used to emulate a real person in real time?” Deepfakes create compliance and data privacy issues around who owns the likeness, and security concerns if a trusted individual’s likeness or voice is used to perpetrate a fraud, he says.

Mandy Andress, CISO at Elastic, expects deepfakes to become more commonplace, spurred on by improvements in generative AI. This year has shown that security teams must play an instrumental role in countering deepfake attacks by helping organizations better understand the risks and educating employees. “Using AI and machine learning can help supercharge efforts, helping teams make decisions and counter attacks by leveraging massive amounts of data,” she says.

Third-party threats have become more complex and diffuse

Growing third-party dependency continues to incentivize breaches that compromise user communities, and at the same time these threats have become more complex across different environments, according to Bethany De Lude, CISO at The Carlyle Group.

“As companies have adopted multi-cloud and SaaS-based business models, new challenges have emerged in managing risk across an information landscape defined by identity, and not a traditionally controlled edge,” she says. In response, De Lude believes that new, pragmatic approaches to data and vendor management will emerge that take into account the changing boundaries and the way security increasingly centers on who has access to data and systems, rather than where those systems are located. “They’ll need to address the way modern businesses operate across a complex, interconnected and distributed environment,” she says.

AI and automation reshaped vulnerability management

This year showed how new tools that leverage AI for automated QA and regression testing at scale are reducing the burden on teams and accelerating safe, effective remediation, according to Rick Doten, VP of information security and CISO at Carolina Complete Health.

“These remediation workflow tools support prioritization, normalization, and de-duplication of findings to route them to the appropriate team, and even create tickets to assign to specific people,” he says. Although this can already be done with security orchestration, automation, and response (SOAR) tools, SOAR requires people to create the automation scripts and the process and workflow to support them.

AI-backed tools address resource limitations and the challenge of distributing responsibility for fixes across many teams that may have different remediation workflows and ticketing systems. “With the dynamic nature of cloud environments, [AI tools are] important because we have tens of thousands of findings to be remediated in workloads,” Doten says.
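The de-duplication and routing step that Doten says SOAR scripts handle today can be sketched without the AI layer. The asset-to-team route table, finding fields, and CVE identifiers below are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    scanner: str  # e.g. "sast", "dast", "cloud"
    cve: str      # CVE or scanner rule ID
    asset: str    # repo, host, or cloud workload

# Hypothetical ownership table; real workflows pull this from a CMDB.
ROUTES = {"payments-api": "payments-team", "auth-svc": "identity-team"}

def normalize_and_route(raw_findings):
    """De-duplicate findings reported by multiple scanners for the same
    (vulnerability, asset) pair, then emit one ticket per unique finding
    routed to the owning team, with a fallback triage queue."""
    unique = {}
    for f in raw_findings:
        unique.setdefault((f.cve, f.asset), f)  # first report wins
    return [
        {"team": ROUTES.get(asset, "security-triage"), "cve": cve, "asset": asset}
        for (cve, asset) in unique
    ]

raw = [
    Finding("sast", "CVE-2024-0001", "payments-api"),
    Finding("dast", "CVE-2024-0001", "payments-api"),  # duplicate report
    Finding("cloud", "CVE-2024-0002", "auth-svc"),
]
print(normalize_and_route(raw))  # two tickets, one per unique finding
```

This is the part a SOAR playbook scripts by hand; the AI-backed tools Doten describes add prioritization and mapping of findings onto each team’s own ticketing system.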

First seen on csoonline.com

Jump to article: www.csoonline.com/article/3615186/security-leaders-top-10-takeaways-for-2024.html

