FloCon 2019 has ended


General Session
Wednesday, January 9

1:30pm EST

Network Telescopes Revisited: From Loads of Unwanted Traffic to Threat Intelligence
A network telescope (a.k.a. darknet) is a monitored but otherwise unused IP space that should not receive any legitimate network traffic. In practice, a lot of packets can be observed there: our network telescope deployed at NASK (Research and Academic Computer Network, Poland), which consists of more than 100 000 unused IP addresses, receives about 30 million packets per hour on average. This presentation will introduce a comprehensive system we developed to analyze malicious traffic on a large scale and produce actionable results in close to real time. We will present case studies where data from our network telescope is used for threat hunting and improving situational awareness.

Presentation plan:
1) Architecture and design
At the beginning, we will discuss basic concepts concerning the architecture of the system and present our approach to data analysis and aggregation.

2) Scanning activity and mass exploitation campaigns
Because we monitor a large number of IP addresses, we can continuously observe and analyze trends in scanning activity. Just looking at the dynamics of targeted ports contributes to better situational awareness, but more in-depth analysis reveals much more information. We will cover the following case studies:
a) GitHub Memcached DRDoS attack: can scanning patterns indicate an upcoming attack?
b) How the publication of vulnerability PoCs or CVEs translates into observed exploitation campaigns.
c) Recognizing the different groups responsible for scanning activity by analyzing their methods and technical capabilities.
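The port-trend analysis described above can be sketched as a simple time-bucketed aggregation. The record format and field names below are illustrative assumptions, not NASK's actual schema:

```python
from collections import Counter
from datetime import datetime

# Hypothetical darknet packet records: (ISO timestamp, source IP, target port).
packets = [
    ("2019-01-09T13:00:05", "198.51.100.7", 11211),
    ("2019-01-09T13:02:41", "198.51.100.7", 11211),
    ("2019-01-09T13:10:12", "203.0.113.9", 445),
    ("2019-01-09T14:01:33", "198.51.100.7", 11211),
]

def port_trend(records):
    """Count packets per (hour, target port) bucket."""
    buckets = Counter()
    for ts, _src, port in records:
        hour = datetime.fromisoformat(ts).strftime("%Y-%m-%dT%H:00")
        buckets[(hour, port)] += 1
    return buckets

trend = port_trend(packets)
# A sudden jump for a single port (e.g. 11211/memcached) across many
# sources can precede a mass exploitation or DRDoS campaign.
```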

3) Denial of Service attacks
A significant part of the traffic we observe is backscatter generated by DoS attacks (for example, TCP SYN or DNS floods) using spoofed source addresses. We are able to identify the victims and estimate the duration and magnitude of attacks. We will show examples of interesting DoS attacks and demonstrate how data from network telescopes can be combined with other sources, like DRDoS honeypots, to obtain a global view of volumetric attacks on the internet.
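As a minimal sketch of this backscatter analysis (with made-up observations and a deliberately simplified flag model): an unsolicited SYN-ACK arriving at a darknet address is backscatter, and its source is likely the victim of a spoofed-source SYN flood. Grouping such packets by source gives a rough victim list with attack duration:

```python
from collections import defaultdict

# Hypothetical observations: (epoch seconds, source IP, TCP flags).
observations = [
    (1000, "192.0.2.10", "SA"),
    (1060, "192.0.2.10", "SA"),
    (1900, "192.0.2.10", "SA"),
    (1005, "192.0.2.99", "S"),   # plain SYN: a scanner, not backscatter
]

def backscatter_summary(obs):
    """Group SYN-ACK backscatter by source; estimate per-victim duration."""
    per_victim = defaultdict(list)
    for ts, src, flags in obs:
        if flags == "SA":
            per_victim[src].append(ts)
    return {
        victim: {"packets": len(stamps),
                 "duration_s": max(stamps) - min(stamps)}
        for victim, stamps in per_victim.items()
    }

summary = backscatter_summary(observations)
```

Magnitude can then be extrapolated from the packet rate, since the telescope sees only the fraction of backscatter that happens to land in its address space.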

4) Fingerprinting packet generation algorithms
Software for network scanning and DoS attacks (including malware) usually has custom code for generating packets. We will show how certain features of packets in live traffic can be analyzed to automatically build signatures that fingerprint individual tools. This approach has been successfully applied to darknet traffic to create multiple signatures, and to traffic from malware sandboxes to link some of those signatures to malware families.
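One publicly documented example of such a packet-generation quirk is Mirai, whose scanner sets the TCP initial sequence number equal to the destination IP address. A check for that signature might look like the sketch below (an illustration of the idea, not the authors' actual system):

```python
import socket
import struct

def looks_like_mirai_syn(dst_ip: str, tcp_seq: int) -> bool:
    """True if the TCP sequence number encodes the destination address,
    a well-known fingerprint of Mirai's SYN scanner."""
    ip_as_int = struct.unpack("!I", socket.inet_aton(dst_ip))[0]
    return tcp_seq == ip_as_int
```

For example, a Mirai-infected host probing 192.0.2.1 sends SYNs with sequence number 0xC0000201; a random sequence number fails the check.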

Attendees will Learn:
Attendees will learn methods for deriving actionable threat intelligence from traffic collected through network telescopes. We will explain how packet characteristics can be used to fingerprint network traffic (scanning or flooding) generated by particular malware families. The talk will have a mostly practical focus, which should be useful for members of CERTs/SOCs. From a research perspective, we will cover recent advancements in the analysis of network telescope traffic.

Piotr Bazydlo

Head of Network Security Methods Team, Research and Academic Computer Network (NASK, Poland)
Piotr Bazydlo earned a master's degree from the Warsaw University of Technology, Faculty of Electronics and Information Technology, in 2016. His adventures with cybersecurity started at NASK (Research and Academic Computer Network) as a researcher in the Network Security Methods...
Adrian Korczak

Network Security Researcher, Research and Academic Computer Network (NASK, Poland)
Adrian Korczak is a network security researcher at Research and Academic Computer Network in Poland (NASK). He finished his BS in Network Systems at the University of California Irvine. His interests cover subjects like malware analysis, sandboxing, and DGA.
Pawel Pawliński

Principal Security Specialist, CERT Polska / NASK
Paweł Pawliński is a principal specialist at CERT.PL. His past job experience includes data analysis, threat tracking, and automation. He is responsible for the design and implementation of the n6 platform for sharing security-related data and has also designed systems for large-scale...

Wednesday January 9, 2019 1:30pm - 2:00pm EST
Grand Ballroom 300 Bourbon St, New Orleans, LA 70130

2:00pm EST

Data as Evidence: Analysis of Logs for Litigation
Security goes well beyond the operational need to identify activity and decide whether it should be allowed to continue unencumbered, further scrutinized, or halted. When it comes to identifying responsible actors and making victims whole, remedies largely depend on criminal and civil adjudication. Successful prosecution and recovery of damages require that data can be admitted as evidence into the legal record, that the means of analyzing the data withstand scrutiny, and that counsel, court, and jurors understand the story the data analyst finds. Furthermore, careless or myopic analysis used in real-time security operations can have disastrous effects when that analysis is scrutinized in litigation.

In this presentation, we consider three case studies where the author led a team that analyzed system
logs, developed findings from the data that were relevant to the nature, scope, and severity of the alleged damage, and presented those results. We focus on the legal processes at work in securing data for analysis, methods for assessing and making use of data, the legal standards for offering expert opinion, and techniques for effectively presenting findings to legal professionals and lay jurors.  The cases are: 
  • Pharmatrak Privacy Litigation, United States Court of Appeals, First Circuit, 329 F.3d 9, in which plaintiffs alleged that pharmaceutical companies collected and sent personal information to undisclosed third parties, in violation of their privacy policies. Forensic analysis of operational system logs led to critical findings that set standards for the application of Federal wiretap statutes to web technology.
  • Ford, et al. v. SBC Communications Inc. and SBC Internet Services, Inc. d/b/a AT&T Internet Services, Inc., Circuit Court of St. Louis County (Missouri), Cause No. 06CC-003325, Division No. 6, in which disparate datasets were analyzed to find any cases where fees were collected for service that could not be provided.
  • New York Stock Exchange Specialists Litigation, U.S. District Court, Southern District of New York, 405 F. Supp. 2d 2, in which the California Public Employees Retirement System (CalPERS) represented a class of investors who were allegedly harmed by securities specialists interpositioning themselves into otherwise executable trades. Analysis of tick-by-tick data from the systems that capture, relay, and display orders for the entire New York Stock Exchange over a five-year period made possible the findings needed to address the allegations.
We discuss techniques for analysis, present examples from the case studies, and conclude with principles for data analysts both to support operational needs and to create the foundation to protect the organization in subsequent litigation.

Attendees Will Learn: 
• When system data and analysis can be exposed to the scrutiny of an adverse party.
• How adverse parties can use the data in unexpected ways.
• How to identify both operational needs and long-term impacts of data collection, analysis, and presentation.
• How to present findings that will withstand not only internal questions but adversarial inquiry.

Matthew Curtin

Founder, Interhack Corporation
C. Matthew Curtin is the founder of Interhack Corporation, a computer expert firm based in Columbus, OH.  His practice helps attorneys and executives in high-stakes situations to understand and make use of computer technology and relevant data.  He has appeared as an expert witness...

Wednesday January 9, 2019 2:00pm - 2:30pm EST
Grand Ballroom 300 Bourbon St, New Orleans, LA 70130

2:30pm EST

Simulating Your Way to Security - One Detector at a Time
Covering a network with sensors is the first step towards security, but the massive flood of unprocessed, raw data points is frequently as paralyzing as having no visibility at all. To find actionable signal in the noise, one has to first define signal and noise. Threat detection must be motivated from a problem-first mentality, rather than a data-first mentality. Using this approach, "Big Data" problems tend to become small, relevant data problems, facilitating accurate and scalable detection solutions. We demonstrate the aforementioned problem-first approach with a case study of a password spray attack against an Active Directory (AD) system. We examine the nature of the attack: how it works, why it works and how its parameter settings interact with attacker style. In the resulting threat model, the "signal" is a sequence of failed authentication attempts from a particular device and the "noise" is the rest of the LDAP traffic.
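A password spray such as the one described above can be simulated as a slow sequence of single guesses across many accounts. The event schema, source address, and pacing below are illustrative assumptions for the sketch, not details from the talk:

```python
def simulate_spray(accounts, start_ts, interval_s, src_ip="10.0.0.66"):
    """One failed guess per account, paced by a fixed interval so the
    attacker stays under per-account lockout thresholds."""
    return [
        {"ts": start_ts + i * interval_s, "src_ip": src_ip,
         "user": user, "result": "failure"}
        for i, user in enumerate(accounts)
    ]

spray = simulate_spray(["alice", "bob", "carol"], start_ts=0, interval_s=30)
# Synthetic events like these can be merged into baseline authentication
# logs by remapping timestamps and IP addresses onto real traffic.
```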

To understand detectability of a dynamic password spray attack in a variable environment, the central idea is to gather samples of attack and merge them with records of the baseline enterprise network traffic. This may be accomplished by mapping timestamps and IP addresses of simulated and real flow data. For successful detection, signal must be discriminable from noise, so we demonstrate how to use time-series and probability density plots, combined with faceting and animation techniques, to visually examine the separation of signal from noise, across the sample of devices. Next, we show how constraints that come from details of the threat model suggest how to reduce the signal into a filtered, low-dimensional summary that preserves discriminability and allows detection to scale to a large network of devices. Finally, we show how the signal summary can be used to construct heuristic and statistical detection methods, and evaluate their efficacy, using accuracy and time-to-detection metrics.
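One way to reduce the signal to a filtered, low-dimensional summary, as described above, is a per-source sliding-window count of failed authentications with a simple threshold. The window size and threshold here are illustrative assumptions, not the authors' tuned values:

```python
from collections import defaultdict, deque

def detect_spray(events, window_s=300, threshold=10):
    """Return (src_ip, detection_ts) for the first source accumulating
    `threshold` failures within `window_s` seconds, or None."""
    recent = defaultdict(deque)
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["result"] != "failure":
            continue
        q = recent[ev["src_ip"]]
        q.append(ev["ts"])
        # Drop failures that fell out of the sliding window.
        while q[0] < ev["ts"] - window_s:
            q.popleft()
        if len(q) >= threshold:
            return ev["src_ip"], ev["ts"]
    return None

# Twelve failures 10 s apart from one source trip the detector at the
# tenth event; time-to-detection is detection_ts minus the first event.
events = [{"ts": 10 * i, "src_ip": "10.0.0.66", "result": "failure"}
          for i in range(12)]
hit = detect_spray(events)
```

Evaluating such a heuristic then reduces to sweeping the threshold and window over merged attack-plus-baseline data and measuring accuracy and time-to-detection.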

Attendees will Learn:
Attendees will learn how to determine whether an attack is detectable and how to quantify a detector's quality using accuracy and time-to-detection metrics. This can improve security operations by focusing investment on reliable detection.

Slava Nikitin

Data Scientist, Columbus Collaboratory
Slava Nikitin is applying statistics and high-performance computing to bring the future back to now.  He is a Data Scientist at Columbus Collaboratory, working on statistical and machine learning modeling, software engineering, and interactive information displays.  He also is...

Wednesday January 9, 2019 2:30pm - 3:00pm EST
Grand Ballroom 300 Bourbon St, New Orleans, LA 70130