
Generating Client Workloads and High-Fidelity Network Traffic for Controllable, Repeatable Experiments in Computer Security

Charles V. Wright, Christopher Connelly, Timothy Braje, Jesse C. Rabek, Lee M. Rossey, and Robert K. Cunningham

Information Systems Technology Group, MIT Lincoln Laboratory, Lexington, MA 02420
{cvwright,connelly,tbraje,lee,rkc}@ll.mit.edu, jesrab@alum.mit.edu

Abstract. Rigorous scientific experimentation in system and network security remains an elusive goal. Recent work has outlined three basic requirements for experiments, namely that hypotheses must be falsifiable, experiments must be controllable, and experiments must be repeatable and reproducible. Despite their simplicity, these goals are difficult to achieve, especially when dealing with client-side threats and defenses, where often user input is required as part of the experiment. In this paper, we present techniques for making experiments involving security and client-side desktop applications like web browsers, PDF readers, or host-based firewalls or intrusion detection systems more controllable and more easily repeatable. First, we present techniques for using statistical models of user behavior to drive real, binary, GUI-enabled application programs in place of a human user. Second, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups of remotely-hosted applications to give the illusion of Internet connectedness on an isolated testbed. We demonstrate the utility of these techniques in an example experiment comparing the system resource consumption of a Windows machine running anti-virus protection versus an unprotected system.

Keywords: Network Testbeds, Assessment and Benchmarking, Traffic Generation.

Footnotes: This work was supported by the US Air Force under Air Force contract FA8721-05C-0002. The opinions, interpretations, conclusions, and recommendations are those of the authors and are not necessarily endorsed by the United States Government. Work performed as a student at MIT; the author is now with Palm, Inc.

1 Introduction

The goal of conducting disciplined, reproducible, “bench style” laboratory research in system and network security has been widely acknowledged [1,2], but remains difficult to achieve. In particular, Peisert and Bishop [2] outline three basic requirements for performing good experiments in security: (i) Hypotheses must be falsifiable; that is, it must be possible to design an experiment to either support or refute the hypothesis. Therefore, the hypothesis must pertain to properties that are both observable and measurable. (ii) Experiments must be controllable; the experimenter should be able to change only one variable at a time and measure the change in results. (iii) Finally, experiments should be both repeatable, meaning that the researcher can perform them several times and get similar results, and reproducible, meaning that others can recreate the experiment and obtain similar results.

Unfortunately, designing an experiment in system or network security that meets these requirements remains challenging. Current practices for measuring security properties have recently been described as “ad-hoc,” “subjective,” or “procedural” [3]. Experiments that deal primarily with hardware and software may be extremely controllable, and recent work [4,5,6,7,8] has explored techniques for deploying and configuring entire networks of servers, PCs, and networking equipment on isolated testbeds, disconnected from the Internet or other networks, where malicious code may safely be allowed to run with low risk of infecting the rest of the world. However, the recent shift in attacks from the server side to the client side [9,10,11,12] means that an experiment involving any one of many current threats, such as drive-by downloads [13] or cross-site scripting, or techniques for detecting and mitigating such intrusions, must account for the behavior of not only the hardware and software of the computing infrastructure itself, but also the behavior of the human users of this infrastructure. Humans remain notoriously difficult to control, and experiments using human subjects are often expensive and time-consuming, and may require extensive interaction with internal review boards.

Repeatability of experiments on the Internet is difficult due to the global network’s scale and its constant state of change and evolution. Even on an isolated testbed, repeatability is hampered by the sheer complexity of modern computer systems. Even relatively simple components like hard disks and Ethernet switches maintain state internally (in cache buffers and ARP tables, respectively), and many components perform differently under varying environmental conditions, e.g., temperature. Many recent CPUs dynamically adjust their clock frequency in reaction to changes in temperature, and studies by Google suggest that temperature plays an important role in the failure rates of hard disk drives [14]. Reproducibility is even harder. It is unclear what level of detail is sufficient for describing the hardware and software used in a test, but current practices in the community likely fall short of the standards for publication in the physical sciences.





The contributions of this paper address two of the above requirements for performing scientific experiments in security. Specifically, we describe techniques that enable controllable, repeatable experiments with client-side attacks and defenses on isolated testbed networks. First, we present techniques for using statistical models of human behavior to drive real, binary, GUI-enabled application programs running on client machines on the testbed, so that tests can be performed without the randomness or privacy concerns inherent to using human subjects. Second, we present adaptive replay techniques for producing convincing facsimiles of remotely-hosted applications (e.g. those on the World Wide Web) that cannot themselves be installed in an isolated testbed network, so that the client-side applications have something to talk to. In doing so, we generate workloads on the hosts and traffic on the network that are both highly controllable and repeatable in a laboratory testbed setting.

On the client side, our approach is to construct a Markov chain model for the way real users interact with each application. Then, during the experiment, we use the Markov chains to generate new event streams similar in distribution to those generated by the real users, and use these to drive the applications on the testbed. This provides a realistic model of measured human behavior, offers variability from trial to trial, and provides an experimenter with the ability to change model parameters to explore new user classes. It also generates a reasonably realistic set of workloads on the host, in terms of running processes, files and directories accessed, open network ports, system call sequences, and system resource consumption (e.g. CPU, memory, disk). Many of these properties of the system are important for experiments involving defensive tools like firewalls, virus scanners, or other intrusion detection systems because they are used by such systems to detect or prevent malicious behavior. Furthermore, because we run unmodified application program binaries on the testbed hosts, we can closely replicate the attack surface of a real network and use the testbed to judge the effectiveness of various real attacks and defenses against one another.
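
As a concrete illustration of this idea (a minimal sketch, not the authors' implementation), the code below trains a first-order Markov chain over observed GUI event sequences and then samples a new event stream similar in distribution to the training traces. The event names and example traces are hypothetical placeholders for the kind of data a real user study would provide.

import random
from collections import defaultdict

def train_markov_chain(event_sequences):
    # Count transitions between consecutive events, then normalize to probabilities.
    counts = defaultdict(lambda: defaultdict(int))
    for seq in event_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

def sample_events(chain, start, length, seed=42):
    # Generate a new event stream similar in distribution to the training traces.
    rng = random.Random(seed)
    events, cur = [start], start
    for _ in range(length - 1):
        if cur not in chain:              # no outgoing transitions observed: restart
            cur = start
        successors, weights = zip(*chain[cur].items())
        cur = rng.choices(successors, weights=weights, k=1)[0]
        events.append(cur)
    return events

# Hypothetical traces of a user driving an email client.
traces = [
    ["open_inbox", "read_msg", "reply", "type_text", "send"],
    ["open_inbox", "read_msg", "delete", "read_msg", "reply", "type_text", "send"],
]
chain = train_markov_chain(traces)
for event in sample_events(chain, "open_inbox", 10):
    print(event)   # on the testbed, each event would be injected into the real GUI

Sampling from the learned chain, rather than replaying a fixed script, is what provides trial-to-trial variability while keeping the overall distribution of behavior controllable.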

Using real applications also allows us to generate valid traffic on the testbed network, even for complicated protocols that are proprietary, undocumented, or otherwise poorly understood. We discuss related work in more detail in the following section, but for now it suffices to say that almost all existing work on synthetically generating network traffic focuses on achieving realism at only one or two layers of the protocol stack. In contrast, our approach provides realistic traffic all the way from the link layer up to and including the contents of the application layer sessions.

For example, by emulating a user replying to an email, with just a few mouse click events, we can generate valid application-layer traffic in open protocols like DNS, IMAP, and LDAP, and in proprietary protocols including SMB/CIFS, DCOM, and MAPI/RPC (Exchange mail). This is, of course, in addition to the SMTP connection used to send the actual message. Each of these connections will exhibit the correct TCP dynamics for the given operating system and will generate the proper set of interactions at lower layers of the stack, including DNS lookups, ARP requests, and possibly Ethernet collisions and exponential backoff.
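
The snippet below is a hedged illustration of what "a few injected input events" might look like in practice. It uses the pyautogui library as a stand-in for the paper's own event-injection mechanism, and the coordinates, hotkeys, and message text are invented for the example; on a real testbed they would depend on the mail client and screen layout in the client image.

import time
import pyautogui   # stand-in GUI automation library; not the authors' tooling

def reply_to_first_message():
    # Hypothetical coordinates and hotkeys for a generic desktop mail client.
    pyautogui.click(200, 150)              # select the first message (triggers IMAP/MAPI fetch)
    time.sleep(1.0)
    pyautogui.hotkey("ctrl", "r")          # open the reply window
    time.sleep(1.0)
    pyautogui.write("Sounds good, see you then.", interval=0.05)   # type a short body
    pyautogui.hotkey("ctrl", "enter")      # send: SMTP, plus DNS and ARP as needed

if __name__ == "__main__":
    reply_to_first_message()

Because the real, unmodified client performs all of the protocol work, the resulting traffic at every layer comes from the application itself rather than from a traffic synthesizer.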

Moreover, if a message in the user’s inbox contains an exploit for his mail client (like the mass-mailing viruses of the late 1990s and early 2000s), simply injecting a mouse click event to open the mail client may launch a wave of infections across the testbed network.

For the case where the actual applications cannot be installed on the isolated test network, we present techniques based on adaptive replay of application dialog that allow us to quickly and efficiently reproduce reasonable mock-ups that make it appear across the network as if the real applications were actually running on the testbed. These techniques are particularly useful for creating a superficially realistic version of the modern World Wide Web, giving the illusion of connectedness on an isolated network.
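
A minimal sketch of the record-and-replay idea follows, under the assumption that previously recorded responses are keyed by request path. The authors' adaptive replay system is more sophisticated (it adapts the dialog and covers protocols beyond HTTP), but this shows how an isolated testbed can answer web requests convincingly with nothing but captured content; the recorded_pages mapping is a hypothetical stand-in for such a capture.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical capture: responses recorded from real sites, keyed by request path.
recorded_pages = {
    "/":     b"<html><body><h1>Recorded front page</h1></body></html>",
    "/news": b"<html><body><p>Recorded news article</p></body></html>",
}

class ReplayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the recorded response if we have one; otherwise return a generic
        # placeholder so that unseen URLs still resolve on the isolated network.
        body = recorded_pages.get(self.path,
                                  b"<html><body><p>Placeholder page</p></body></html>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # On the testbed, DNS for external names would point browsers at this host.
    HTTPServer(("0.0.0.0", 8080), ReplayHandler).serve_forever()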

To illustrate the utility of these techniques, we perform a simple experiment that would be labor-intensive and time-consuming to conduct without such tools. Specifically, we investigate the performance impact of open source anti-virus (AV) software on client machines. Conventional folk wisdom in the security community has been that AV products incur a significant performance penalty, and this has been used to explain the difficulty of convincing end users to employ such protection. Surprisingly, relatively little effort has been put into quantifying the drop in performance incurred, perhaps due to the difficulty of performing such a test in a controllable and repeatable manner.
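
One way such a measurement could be scripted (our assumption, not the paper's harness) is to sample system-wide resource counters while the same generated client workload runs, once with the AV product installed and once without, so that the AV install is the only variable that changes. The sketch below uses the psutil library; the interval, duration, and output format are arbitrary choices.

import csv
import time
import psutil   # measurement library chosen for this sketch; not part of the paper's tooling

def sample_resources(duration_s, out_path, interval_s=5):
    # Record system-wide CPU, memory, and cumulative disk I/O at a fixed interval
    # while the scripted client workload runs.
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["elapsed_s", "cpu_percent", "mem_percent",
                         "disk_read_bytes", "disk_write_bytes"])
        start = time.time()
        while time.time() - start < duration_s:
            cpu = psutil.cpu_percent(interval=interval_s)   # blocks for interval_s
            mem = psutil.virtual_memory().percent
            io = psutil.disk_io_counters()
            writer.writerow([round(time.time() - start), cpu, mem,
                             io.read_bytes, io.write_bytes])

if __name__ == "__main__":
    # Run once with the AV product installed and once without, changing nothing else.
    sample_resources(duration_s=3600, out_path="resources_with_av.csv")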

The remainder of the paper is organized as follows. In Section 2, we review related work in network testbeds, automation of GUI applications, modeling user behavior, and network traffic generation. In Section 3, we present our techniques for driving real binary applications and for crafting reasonable facsimiles of networked applications that we cannot actually install on the testbed. In Section 4, we walk through a simple experiment to demonstrate the utility of these techniques and to highlight some challenges in obtaining repeatable results. Finally, we conclude in Section 5 with some thoughts on future directions for research in this area.

2 Related Work

Several approaches for configuring, automating, and managing network laboratory testbeds have recently been proposed, including Emulab [4], FlexLab [5], ModelNet [6], and VINI [7]. Our group’s LARIAT testbed platform [8] grew out of earlier work in the DARPA intrusion detection evaluations [15,16] and was designed specifically for tests of network security applications. More recently, along with others in our group, two of the current authors developed a graphical user interface for testbed management and situational awareness [17] for use with LARIAT. The DETER testbed [18] is built on Emulab [4] and, like LARIAT, is also geared toward network security experiments. The primary contribution of this paper, which is complementary to the above approaches, is to generate client-side workloads and network traffic for experiments on such testbeds. The techniques in Section 3.1 were first described in the fourth author’s (unpublished) MIT Master’s thesis [19]. USim, by Garg et al. [20], uses similar techniques for building profiles of user behavior, and uses scripted templates to generate data sets for testing intrusion detection systems.

Our server-side approach for emulating the Web is similar to the dynamic application layer replay techniques of Cui et al. [21,22] and Small et al. [23]. Like our client-side approach, the MITRE HoneyClient [24] and Strider HoneyMonkeys from Microsoft Research [25] drive real GUI applications, but that work focuses narrowly on automating web browsers to discover new vulnerabilities and does not attempt to model the behavior of a real human at the controls.

Software frameworks exist for the general-purpose automation of GUI applications, including autopy [26] and SIKULI [27], but these also require higher-level logic for deciding which commands to inject. PLUM [28] is a system for learning models of user behavior from an instrumented desktop environment. Simpson et al. [29] and Kurz et al. [30] present techniques for deriving empirical models of user behavior from network logs.

There is a large body of existing work on generating network traffic for use on testbeds or in simulations, but unfortunately most of these techniques were not designed for security experiments. Simply replaying real traffic [31,32] does not allow for controllable experiments. Other techniques for generating synthetic traffic based on models learned from real traffic [33,34,35,36,37] can match several important statistical properties of the input trace at the Network and Transport layers. However, because these approaches do not generate application layer traffic, they are not compatible with many security tools like content-based filters and intrusion detection or prevention systems, and they cannot interact with real applications on a testbed. Sommers et al. [38] present a hybrid replay-synthesis approach that may be more appropriate for some experiments in security. Mutz et al. [39], Kayacik and Zincir-Heywood [40], and other work by Sommers et al. [41] generate traffic specifically for the evaluation of defensive tools.

Commercial products from companies including Ixia, BreakingPoint, and Spirent can generate application-layer traffic, but their focus is on achieving high data rates rather than realistic models of individual user behavior, and their implementations do not necessarily exhibit the same attack surface as the real applications.


