DLP (Data Loss Prevention) Interview Questions
This set of DLP interview questions covers data loss prevention concepts: what DLP is, how DLP fits into security, DLP solutions and tools, DLP teams and devices, DLP requirements, data leak prevention policies, and data loss prevention software.
Question |
Briefly describe DLP and how it works. |
Answer |
Data Loss Prevention is one of the most hyped, and least understood, tools in the security arsenal. With at least a half-dozen different names and even more technology approaches, it can be difficult to understand the ultimate value of the tools and which products best suit which environments. DLP is an adolescent technology that provides significant value for those organizations that need it, despite products that may not be as mature as in other areas of IT. The market is currently dominated by startups, but large vendors have started stepping in, typically through acquisition. There is a lack of consensus on what actually constitutes a DLP solution: some people consider encryption or USB port control to be DLP, while others limit the definition to complete product suites. Securosis defines DLP as: products that, based on central policies, identify, monitor, and protect data at rest, in motion, and in use, through deep content analysis. Thus the key defining characteristics are:
• Deep content analysis
• Central policy management
• Broad content coverage across multiple platforms and locations
DLP monitors the following types of data, including (but not limited to):
• Email
• Webmail
• HTTP (message boards, blogs, and other websites)
• Instant messaging
• Peer-to-peer sites and sessions
• FTP |
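To make "deep content analysis" concrete, the following is a minimal illustrative sketch in Python of one common detection technique: regular-expression matching for credit-card-like numbers combined with a Luhn checksum to cut false positives. The pattern, function names, and sample text are assumptions for illustration, not taken from any particular DLP product.

```python
import re

# Candidate card numbers: 13-16 digits, optionally separated by spaces or dashes.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Very simplified 'deep content analysis': regex match plus Luhn validation."""
    hits = []
    for match in CARD_PATTERN.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

if __name__ == "__main__":
    sample = "Please charge card 4111 1111 1111 1111 for the order."
    print(find_card_numbers(sample))  # ['4111111111111111']
```

Real products layer many more techniques on top of this (exact data matching, partial document matching, statistical analysis), but the principle of inspecting content rather than just metadata is the same.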
Question |
Is there any software used for DLP? |
Answer |
Yes. A DLP product includes centralized management, policy creation, and enforcement workflow dedicated to the monitoring and protection of content and data. The user interface and functionality are dedicated to solving the business and technical problems of protecting content through content awareness. |
Question |
Explain what types of data we can protect. |
Answer |
The goal of DLP is to protect content throughout its lifecycle. In terms of DLP, this includes three major aspects:
• Data At Rest includes scanning of storage and other content repositories to identify where sensitive content is located. We call this content discovery. For example, you can use a DLP product to scan your servers and identify documents with credit card numbers. If the server isn't authorized for that kind of data, the file can be encrypted or removed, or a warning sent to the file owner.
• Data In Motion is sniffing of traffic on the network (passively or inline via proxy) to identify content being sent across specific communications channels. For example, this includes sniffing emails, instant messages, and web traffic for snippets of sensitive source code. Data In Motion tools can often block based on central policies, depending on the type of traffic.
• Data In Use is typically addressed by endpoint solutions that monitor data as the user interacts with it. For example, they can identify when you attempt to transfer a sensitive document to a USB drive and block it (as opposed to blocking use of the USB drive entirely), as sketched below. Data In Use tools can also detect things like copy and paste, or use of sensitive data in an unapproved application (such as someone attempting to encrypt data to sneak it past the sensors).
DLP monitors the following types of data, including (but not limited to):
• Email
• Webmail
• HTTP (message boards, blogs, and other websites)
• Instant messaging
• Peer-to-peer sites and sessions
• FTP |
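As a rough illustration of the Data In Use case above, here is a hypothetical endpoint-style check that refuses to copy a file containing card-number-like content to removable media. The mount-point paths and the simple pattern check are assumptions for illustration; real endpoint agents hook the operating system (file-system filter drivers and device control) rather than wrapping a copy function.

```python
import re
import shutil
from pathlib import Path

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

# Hypothetical removable-media mount points used only for this sketch.
REMOVABLE_MOUNTS = [Path("/media"), Path("/run/media")]

def is_removable_target(dest: Path) -> bool:
    return any(str(dest).startswith(str(m)) for m in REMOVABLE_MOUNTS)

def copy_with_dlp_check(src: Path, dest: Path) -> bool:
    """Block the copy if the source looks sensitive and the target is removable media."""
    text = src.read_text(errors="ignore")
    if is_removable_target(dest) and CARD_PATTERN.search(text):
        print(f"BLOCKED: {src} appears to contain card numbers; copy to {dest} denied")
        return False
    shutil.copy2(src, dest)
    return True
```

Note how the check blocks only the sensitive transfer, not the use of removable media in general, which is exactly the distinction made above.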
Question |
Briefly describe the architecture of DLP. |
Answer |
Technical Architecture: Protecting Data In Motion, At Rest, and In Use

The goal of DLP is to protect content throughout its lifecycle. In terms of DLP, this includes three major aspects:
• Data At Rest includes scanning of storage and other content repositories to identify where sensitive content is located. We call this content discovery. For example, you can use a DLP product to scan your servers and identify documents with credit card numbers. If the server isn't authorized for that kind of data, the file can be encrypted or removed, or a warning sent to the file owner.
• Data In Motion is sniffing of traffic on the network (passively or inline via proxy) to identify content being sent across specific communications channels. For example, this includes sniffing emails, instant messages, and web traffic for snippets of sensitive source code. Data In Motion tools can often block based on central policies, depending on the type of traffic.
• Data In Use is typically addressed by endpoint solutions that monitor data as the user interacts with it. For example, they can identify when you attempt to transfer a sensitive document to a USB drive and block it (as opposed to blocking use of the USB drive entirely). Data In Use tools can also detect things like copy and paste, or use of sensitive data in an unapproved application (such as someone attempting to encrypt data to sneak it past the sensors).

Data in Motion
Many organizations first enter the world of DLP with network-based products that provide broad protection for managed and unmanaged systems. It's typically easier to start a deployment with network products to gain broad coverage quickly. Early products limited themselves to basic monitoring and alerting, but all current products include advanced capabilities to integrate with existing network infrastructure and provide protective, not just detective, controls.

Network Monitor
At the heart of most DLP solutions lies a passive network monitor. The network monitoring component is typically deployed at or near the gateway on a SPAN port (or a similar tap). It performs full packet capture, session reconstruction, and content analysis in real time. Performance is more complex and subtle than vendors normally discuss. On the client expectation side, most clients claim they need full gigabit Ethernet performance, but that level of performance is unnecessary except in very unusual circumstances, since few organizations really run that high a level of communications traffic; DLP is a tool to monitor employee communications, not web application traffic. Realistically, small enterprises normally run under 50 MB/s of relevant traffic, medium enterprises closer to 50-200 MB/s, and large enterprises around 300 MB/s (maybe as high as 500 MB/s in a few cases). Because of the content analysis overhead, not every DLP product runs full packet capture; you might have to choose between pre-filtering (and thus missing non-standard traffic) or buying more boxes and load balancing. Also, some products lock monitoring into pre-defined port and protocol combinations, rather than using service/channel identification based on packet content. Even if full application channel identification is included, you want to make sure it's enabled; otherwise, you might miss non-standard communications such as connecting over an unusual port. Most network monitors are dedicated general-purpose server hardware with DLP software installed; a few vendors deploy true specialized appliances. While some products have management and workflow built into the network monitor, this is often offloaded to a separate server or appliance.
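The passive network monitor described above can be illustrated with a toy sniffer. This sketch assumes the third-party scapy library, a SPAN/mirror interface named eth1, and unencrypted traffic; a real DLP monitor performs full session reconstruction and far richer content analysis than a per-packet regex.

```python
# Toy passive monitor: sniff mirrored traffic and flag payloads that contain
# card-number-like patterns. Interface name and pattern are assumptions.
import re
from scapy.all import sniff, Raw, TCP, IP

CARD_PATTERN = re.compile(rb"\b(?:\d[ -]?){13,16}\b")

def inspect(packet):
    if packet.haslayer(IP) and packet.haslayer(TCP) and packet.haslayer(Raw):
        payload = packet[Raw].load
        if CARD_PATTERN.search(payload):
            print(f"ALERT {packet[IP].src}:{packet[TCP].sport} -> "
                  f"{packet[IP].dst}:{packet[TCP].dport} possible card number")

# Requires a mirror/SPAN interface and sufficient privileges to sniff.
sniff(iface="eth1", filter="tcp", prn=inspect, store=False)
```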
Email Integration
The next major component is email integration. Since email is store-and-forward, you can gain a lot of capabilities, including quarantine, encryption integration, and filtering, without the same hurdles involved in blocking synchronous traffic. Most products embed an MTA (Mail Transport Agent) into the product, allowing you to just add it as another hop in the email chain. Quite a few also integrate directly with some of the major existing MTAs/email security solutions for better performance. One weakness of this approach is that it doesn't give you access to internal email. If you're on an Exchange server, internal messages never make it through the external MTA, since there's no reason to send that traffic out. To monitor internal mail you'll need direct Exchange/Lotus integration, which is surprisingly rare in the market. Full integration is different from just scanning logs/libraries after the fact, which is what some companies call internal mail support. Good email integration is absolutely critical if you ever want to do any filtering, as opposed to just monitoring.
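A highly simplified sketch of the "extra MTA hop" idea follows: parse a message, scan its text parts, and either relay it onward or hold it for review. The next-hop host name, the quarantine file, and the policy logic are illustrative assumptions, not any vendor's actual workflow.

```python
import re
import smtplib
from email import message_from_bytes
from email.policy import default

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def scan_and_relay(raw_message: bytes, next_hop: str = "smtp.internal.example") -> str:
    """Scan an outbound message; quarantine it or relay it to the next MTA hop."""
    msg = message_from_bytes(raw_message, policy=default)
    for part in msg.walk():
        if part.get_content_type() == "text/plain":
            if CARD_PATTERN.search(part.get_content()):
                # Hold the message instead of relaying; a real product would
                # also open an incident and notify the policy manager.
                with open("quarantine.eml", "wb") as fh:
                    fh.write(raw_message)
                return "quarantined"
    with smtplib.SMTP(next_hop) as smtp:
        smtp.send_message(msg)
    return "relayed"
```

Because email is store-and-forward, holding a message like this does not break the protocol, which is why email is the easiest channel to filter.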
This means you don’t have to add another piece of hardware in front of your network traffic and the DLP vendors can avoid the Proxy difficulties of building dedicated network hardware for inline analysis. If the gateway includes a reverse SSL proxy you can also sniff SSL connections. You will need to make changes on your endpoints to deal with all the certificate alerts, but you can now peer DLP into encrypted traffic. For Instant Messaging you’ll need an IM proxy and a DLP product that specifically supports whatever IM protocol you’re using. TCP Poisoning The last method of filtering is TCP poisoning. You monitor the traffic and when you see Proxy/Gateway something bad, you inject a TCP reset packet to kill the connection. This works on every TCP protocol but isn’t very efficient. For one thing, some protocols will keep trying to get the traffic through. If you TCP poison a single email message, the server will keep trying to send it for 3 days, as often as every 15 minutes. The other problem is the same as bridging — since you don’t queue the traffic at all, by the time you notice something bad it might be too late. It’s a good stop-gap to cover nonstandard protocols, but you’ll want to proxy as much as possible. Internal Networks Although technically capable of monitoring internal networks, DLP is rarely used on internal traffic other than email. Gateways provide convenient choke points; internal monitoring is a daunting prospect from cost, performance, and policy management/false positive standpoints. A few DLP vendors have partnerships for internal monitoring but this is a lower priority feature for most organizations. Distributed and Hierarchical Deployments All medium to large enterprises, and many smaller organizations, have multiple locations and web gateways. A DLP solution should support multiple monitoring points, including a mix of passive network monitoring, proxy points, email servers, and remote locations. While processing/analysis can be offloaded to remote enforcement points, they should 13 send all events back to a central management server for workflow, , investigations, and archiving. Remote offices are usually easy to support since you can just push policies down and back, but not every product has this capability. The more advanced products support hierarchical deployments for organizations that want to manage DLP differently in multiple geographic locations, or by business unit. International companies often need this to meet legal monitoring requirements which vary by country. Hierarchical management supports coordinated local policies and enforcement in different regions, running on their own management servers, Communicating back to a central management server. Early products only supported one management server but now we have options to deal with these distributed situations, with a mix of corporate/regional/business unit policies, , and workflow. Data at Rest While catching leaks on the network is fairly powerful, it’s only one small part of the problem. Many customers are finding that it’s just as valuable, if not more valuable, to figure out where all that data is stored in the first place. We call this content discovery. Enterprise search tools might be able to help with this, but they really aren’t tuned well for this specific problem. Enterprise data classification tools can also help, but based on discussions with a number of clients they don’t seem to work well for finding specific policy violations. 
Internal Networks
Although technically capable of monitoring internal networks, DLP is rarely used on internal traffic other than email. Gateways provide convenient choke points; internal monitoring is a daunting prospect from cost, performance, and policy management/false positive standpoints. A few DLP vendors have partnerships for internal monitoring, but this is a lower-priority feature for most organizations.

Distributed and Hierarchical Deployments
All medium to large enterprises, and many smaller organizations, have multiple locations and web gateways. A DLP solution should support multiple monitoring points, including a mix of passive network monitoring, proxy points, email servers, and remote locations. While processing/analysis can be offloaded to remote enforcement points, they should send all events back to a central management server for workflow, investigations, and archiving. Remote offices are usually easy to support, since you can just push policies down and send events back, but not every product has this capability. The more advanced products support hierarchical deployments for organizations that want to manage DLP differently in multiple geographic locations, or by business unit. International companies often need this to meet legal monitoring requirements, which vary by country. Hierarchical management supports coordinated local policies and enforcement in different regions, running on their own management servers, communicating back to a central management server. Early products only supported one management server, but now we have options to deal with these distributed situations, with a mix of corporate/regional/business unit policies and workflow.

Data at Rest
While catching leaks on the network is fairly powerful, it's only one small part of the problem. Many customers are finding that it's just as valuable, if not more valuable, to figure out where all that data is stored in the first place. We call this content discovery. Enterprise search tools might be able to help with this, but they really aren't tuned well for this specific problem. Enterprise data classification tools can also help, but based on discussions with a number of clients they don't seem to work well for finding specific policy violations. Thus we see many clients opting to use the content discovery features of their DLP products. The biggest advantage of content discovery in a DLP tool is that it allows you to take a single policy and apply it across data no matter where it's stored, how it's shared, or how it's used. For example, you can define a policy that requires credit card numbers to only be emailed when encrypted, never be shared via HTTP or HTTPS, only be stored on approved servers, and only be stored on workstations/laptops by employees on the accounting team. All of this can be specified in a single policy on the DLP management server.

Content discovery consists of three components:
1. Endpoint Discovery: scanning workstations and laptops for content.
2. Storage Discovery: scanning mass storage, including file servers, SAN, and NAS.
3. Server Discovery: application-specific scanning of stored data on email servers, document management systems, and databases (not currently a feature of most DLP products, but beginning to appear in some Database Activity Monitoring products).

Content Discovery Techniques
There are three basic techniques for content discovery:
1. Remote Scanning: a connection is made to the server or device using a file sharing or application protocol, and scanning is performed remotely. This is essentially mounting a remote drive and scanning it from a server that takes policies from, and sends results to, the central policy server. For some vendors this is an appliance, for others it's a commodity server, and for smaller deployments it's integrated into the central management server.
2. Agent-Based Scanning: an agent is installed on the system (server) to be scanned and scanning is performed locally. Agents are platform specific and use local CPU cycles, but can potentially perform significantly faster than remote scanning, especially for large repositories. For endpoints, this should be a feature of the same agent used for enforcing Data In Use controls.
3. Memory-Resident Agent Scanning: rather than deploying a full-time agent, a memory-resident agent is installed, performs a scan, then exits without leaving anything running or stored on the local system. This offers the performance of agent-based scanning in situations where you don't want an agent running all the time.

Any of these technologies can work for any of the modes, and enterprises will typically deploy a mix depending on policy and infrastructure requirements. We currently see technology limitations with each approach which guide deployment. Remote scanning can significantly increase network traffic and has performance limitations based on network bandwidth and target and scanner network performance; some solutions can only scan gigabytes per day (sometimes hundreds, but not terabytes) per server based on these practical limitations, which may be inadequate for very large storage. Agents, temporary or permanent, are limited by the processing power and memory on the target system, which often translates to restrictions on the number of policies that can be enforced and the types of content analysis that can be used; for example, most endpoint agents are not capable of partial document matching or database fingerprinting against large data sets. This is especially true of endpoint agents, which are more limited. Agents also don't support all platforms.
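Here is a minimal sketch of remote-scanning-style content discovery: walk a mounted share and report files containing card-number-like patterns. The share path is an assumption, and real discovery engines also parse file formats such as Office documents, PDFs, and archives rather than reading everything as text.

```python
import os
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def discover(root: str = "/mnt/fileserver") -> list[str]:
    """Walk a mounted repository and return paths that appear to hold card data."""
    findings = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "r", errors="ignore") as fh:
                    if CARD_PATTERN.search(fh.read()):
                        findings.append(path)
            except OSError:
                continue  # unreadable file; a real scanner would log this
    return findings

if __name__ == "__main__":
    for path in discover():
        print("Sensitive content found:", path)
```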
Data at Rest Enforcement
Once a policy violation is discovered, the DLP tool can take a variety of actions:
• Alert/Report: create an incident in the central management server, just like a network violation.
• Warn: notify the user via email that they may be in violation of policy.
• Quarantine/Notify: move the file to the central management server and leave a text file with instructions on how to request recovery of the file.
• Quarantine/Encrypt: encrypt the file in place, usually leaving a plain text file describing how to request decryption.
• Quarantine/Access Control: change access controls to restrict access to the file.
• Remove/Delete: either transfer the file to the central server without notification, or just delete it.
The combination of different deployment architectures, discovery techniques, and enforcement options is a powerful toolkit for protecting data at rest and supporting compliance initiatives. For example, we're starting to see increasing deployments of CMF (Content Monitoring and Filtering, an earlier name for DLP) to support PCI compliance, more for the ability to ensure (and report) that no cardholder data is stored in violation of PCI than to protect email or web traffic.

Data in Use
DLP usually starts on the network because that's the most cost-effective way to get the broadest coverage. Network monitoring is non-intrusive (unless you have to crack SSL) and offers visibility to any system on the network, managed or unmanaged, server or workstation. Filtering is more difficult, but still relatively straightforward on the network (especially for email) and covers all systems connected to the network. But it's clear this isn't a complete solution; it doesn't protect data when someone walks out the door with a laptop, and can't even prevent people from copying data to portable storage like USB drives. To move from a "leak prevention" solution to a "content protection" solution, products need to expand not only to stored data, but to the endpoints where data is used.

Note: although there have been large advancements in endpoint DLP, endpoint-only solutions are not recommended for most users. They normally require compromise on the number and types of policies that can be enforced, offer limited email integration, and offer no protection for unmanaged systems. Long term, you'll need both network and endpoint capabilities, and most of the leading network solutions are adding or already offer at least some endpoint protection.

Adding an endpoint agent to a DLP solution not only gives you the ability to discover stored content, but potentially to protect systems no longer on the network, or even to protect data as it's being actively used. While extremely powerful, this has been problematic to implement. Agents need to perform within the resource constraints of a standard laptop while maintaining content awareness. This can be difficult if you have large policies such as "protect all 10 million credit card numbers from our database", as opposed to something simpler like "protect any credit card number", which will generate false positives every time an employee visits Amazon.com. |
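As one concrete example of the Data at Rest enforcement actions listed above, this sketch implements a Quarantine/Notify-style action: move the offending file to a quarantine directory and leave a text note in its place. The paths and wording are illustrative assumptions, not any product's actual behavior.

```python
import shutil
from pathlib import Path

QUARANTINE_DIR = Path("/var/dlp/quarantine")  # illustrative location

def quarantine_notify(path: Path, policy_name: str) -> Path:
    """Move a violating file to quarantine and leave a recovery notice behind."""
    QUARANTINE_DIR.mkdir(parents=True, exist_ok=True)
    target = QUARANTINE_DIR / path.name
    shutil.move(str(path), str(target))
    notice = path.parent / (path.name + ".QUARANTINED.txt")
    notice.write_text(
        f"This file was quarantined by DLP policy '{policy_name}'.\n"
        "Contact the security team to request recovery.\n"
    )
    return target
```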
Question |
What are the methods used for protecting the data? |
Answer |
1. Save as you work. You should always save your work as you go and learn how to use the 'auto-save' features in your applications.
2. Make a backup. Before you make changes to critical data, always make a duplicate. Even if you just made a backup yesterday, make another.
3. Keep a copy of your data offsite. Diligently backing up your data is good practice, but keep a copy of your data offsite as well. If there were a fire or other disaster, your onsite backup could be lost too.
4. Refresh your archives. Years ago you archived your data to a zip drive; now you decide to use that data as a baseline. Are you sure there is still a zip drive that can read your data? As technology changes, it is a good idea to transfer your data to a current data storage standard so that you aren't stuck with irretrievable data. Information Systems & Technology (IST) provides a backup service.
5. Never open email attachments by habit. If your email reader has an option to automatically open attachments, disable that feature. Always run attachments and downloaded files through a virus scanner first.
6. Never trust disks from other people. Any time you receive a file on any type of media, check it for viruses first!
7. Update your software. Make sure you have the latest updates for your software, especially for your virus-checking software. Make it a habit to regularly check for updates and enable automatic updates for software that offers that feature.
8. Protect your passwords. Your user ID is your identity, and the key to your identity is your password. Any time your account accesses the network you are responsible for any activity from that account (see Guidelines on use of Waterloo Computing and Network Resources). Remember to change your password on a regular basis.
9. Protect your computer. Use a secure operating system that requires users to be authenticated. As an added benefit, these operating systems also restrict what individual users can see and do on the system.
10. Perform regular maintenance. Learn how to use the utilities that diagnose your system for problems. It is a good idea to run a disk-scanning program, defragment your hard drive, or do whatever else your system might need. These utilities can prevent little problems from becoming big problems and will keep your system running at top speed. If you need help with a big problem, IST has a hardware repair service. |
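As a small sketch of tips 2 and 3 above, the following creates a timestamped archive of a working directory that could then be copied offsite. The source and destination paths are illustrative assumptions.

```python
import tarfile
from datetime import datetime
from pathlib import Path

def backup(source: str = "~/projects", dest_dir: str = "~/backups") -> Path:
    """Create a timestamped .tar.gz copy of a directory for offsite storage."""
    src = Path(source).expanduser()
    dest = Path(dest_dir).expanduser()
    dest.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    archive = dest / f"{src.name}-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src, arcname=src.name)
    return archive

if __name__ == "__main__":
    print("Backup written to:", backup())
```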
Question |
What are the platforms and applications supported? |
Answer |
It supports Microsoft Windows and Linux platforms. |
Question |
How do you ensure that there is no data loss? |
Answer |
This all depends upon how DLP is implemented in your environment. While implementing DLP, we need to configure and apply the policies. If any incident is triggered, a notification goes to the policy manager and action is taken as per the policy; the incident is quarantined in the DLP database. |
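A minimal sketch of the incident workflow just described, assuming a simple SQLite table as the "DLP DB" and a print statement standing in for the policy-manager notification; a real product has its own schema, workflow, and alerting.

```python
import sqlite3
from datetime import datetime, timezone

def record_incident(db_path: str, policy: str, channel: str, detail: str) -> None:
    """Record a policy violation as a quarantined incident and notify the policy manager."""
    with sqlite3.connect(db_path) as db:
        db.execute(
            "CREATE TABLE IF NOT EXISTS incidents "
            "(ts TEXT, policy TEXT, channel TEXT, detail TEXT, status TEXT)"
        )
        db.execute(
            "INSERT INTO incidents VALUES (?, ?, ?, ?, ?)",
            (datetime.now(timezone.utc).isoformat(), policy, channel, detail,
             "quarantined"),
        )
    print(f"Notified policy manager: policy '{policy}' violated on channel '{channel}'")

if __name__ == "__main__":
    record_incident("dlp_incidents.db", "PCI card data", "email", "outbound message held")
```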
Question |
How do you uninstall the package? |
Answer |
1. In the Systems Management Server console, right-click Packages and select New | Package.
2. In the General tab, type the Package Name (required), and the Version, Publisher and Language (optional).
3. In the Data Source tab, select This Package Contains Source Files, then click Set.
4. In the Set Source Directory window under Source Directory Location, select the type of connection to the set-up files in the source directory. Type the source directory path in the field and click OK.
5. In the Distribution Settings tab, select High from the Sending Priority drop-down list, and click OK. The package appears under the Packages node of the site tree.
6. Expand the new package under the Packages node. |
Question |
Sometimes we get errors browsing SSL websites even if they are added as exceptions. How do we fine-tune those? |
Answer |
This is not related to DLP; please check the SSL certificate. |
Question |
What RPO and RTO can be assured for data with this product? |
Answer |
We match the requirements against business needs and plans, and on that basis the desired recovery point objective (RPO) and recovery time objective (RTO) are decided. For example, if the business can tolerate losing at most four hours of data, the RPO is four hours and backups must run at least that often, while the RTO defines how quickly the data must be restored and usable again. |
Question |
Can you please give a scenario where DLP can be used? (Please give a practical scenario.) |
Answer |
We can use DLP in the banking sector, where user databases are maintained. DLP is a business requirement, not just an IT requirement. Almost all banks are using a DLP solution, and nowadays non-IT businesses are also implementing DLP solutions. |
Question |
What about storage supportability? |
Answer |
You can use any type of storage solution, including NAS, SAN, and DAS. |
Question |
What types of policies can we apply in the banking sector? |
Answer |
This all depends on what kind of data you have and what action you would like to take on that data. We can monitor email, USB, printers, and databases. There are four types of policy actions: 1) Permit 2) Permit + notification 3) Deny 4) Deny + notification |
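The four policy action types above can be expressed as a small evaluation function. The rule format and channel names are illustrative assumptions, not a specific product's policy model.

```python
from dataclasses import dataclass

@dataclass
class PolicyRule:
    channel: str   # e.g. "email", "usb", "printer", "db"
    matches: bool  # result of content analysis for this event
    action: str    # "permit", "permit+notify", "deny", "deny+notify"

def evaluate(rule: PolicyRule) -> tuple[bool, bool]:
    """Return (allowed, notify) for an event that hit this rule."""
    if not rule.matches:
        return True, False
    allowed = rule.action.startswith("permit")
    notify = rule.action.endswith("+notify")
    return allowed, notify

print(evaluate(PolicyRule(channel="usb", matches=True, action="deny+notify")))
# (False, True): block the transfer and notify the policy manager
```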
Question |
Briefly describe the upgrade process. Are there any prerequisites to be followed apart from hardware and software compatibility? |
Answer |
1) Take a full backup before the upgrade 2) Take a policy backup before the upgrade 3) Check the release notes of the new version 4) Check the compatibility of the new version |
Question |
Can any customized policy be configured in Websense DLP? |
Answer |
Yes, we can configure customized policies. You have to apply the package to apply them. |
Question |
What are the prerequisites for implementing DLP? |
Answer |
1) Management server 2) Connectivity between the endpoints and the management server 3) Policy server 4) Device license |
Question |
We are using DLP version 11.0. Kindly explain how to apply the policy. |
Answer |
We have a client who wants to port (not upgrade) the DLP Enforce Server 10.5 (running on a Windows 2003 32-bit machine) and its Oracle 10g database (two-tier implementation). As Windows 2003 and Oracle 10g are not the standard platform, IT operations wants to build a new Enforce Server 11.5 on a Red Hat Linux 64-bit machine. As this will be a new Enforce Server 11.5 (11.0 -> 11.1 -> 11.5), how would one migrate the policies and old incident logs from the old Oracle 10g database? When you upgrade the DLP Enforce Server, the Symantec upgrade wizard handles the migration of the policies and other content (step 1) and migrates the incident log from Oracle 10g to 11g when you upgrade the Oracle database (step 2). But our circumstances are different: we have to install a fresh copy of DLP Enforce Server 11.0 on new hardware that supports 64-bit Red Hat Linux, and a new Oracle 11g database server, also on new hardware that supports 64-bit Red Hat Linux. |