Risk Mitigation Techniques

All businesses face risk. Beyond its day-to-day operations, a company can set itself apart from its competition through its ability to manage and deal with risk.

Risk mitigation strategies refer to the different methods of dealing with business risk. This article covers risk mitigation strategies, risk evaluation, how to determine risk mitigation plans, trends in risk mitigation, the use of analytics tools, and the bottom line. There are several principal risk mitigation strategies.

Of course, each one serves a different purpose for different businesses. It becomes a subjective matter to decide how to approach risk. However, with the use of risk management software and risk assessment matrices, you can be better prepared to assess, monitor and manage risk.
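To make the idea of a risk assessment matrix concrete, here is a minimal sketch of how such a matrix is often scored. The 1-to-5 likelihood and impact scales and the rating thresholds are illustrative assumptions, not values taken from this article.

```python
# Minimal sketch of a risk assessment matrix score. The 1-5 likelihood and
# impact scales and the rating thresholds below are illustrative assumptions.

def risk_rating(likelihood: int, impact: int) -> str:
    """Combine likelihood and impact (each 1-5) into a qualitative rating."""
    score = likelihood * impact          # classic probability x consequence product
    if score >= 15:
        return "high"                    # mitigate or avoid
    if score >= 6:
        return "medium"                  # mitigate, transfer, or monitor closely
    return "low"                         # usually acceptable

print(risk_rating(likelihood=4, impact=5))  # -> high
print(risk_rating(likelihood=1, impact=3))  # -> low
```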

Risk Acceptance: Accepting a risk does not reduce or eliminate it. Instead, it means understanding the probability of the risk happening and accepting the consequences that may occur. This is the best strategy when a risk is small or unlikely to happen. It makes sense to accept a risk when the cost of mitigating or avoiding it would be higher than merely accepting it and leaving it to chance. Risk Avoidance: If the risk from starting a project, launching a product, moving your business, and so on is too great, risk avoidance means not performing the activity that causes the risk.

Managing risk in this way is much like how people address personal risks. While some people are more risk-loving and others are more risk-averse, everyone has a tipping point at which things become just too risky and not worth attempting.

Risk Mitigation: When risks are evaluated, some are better neither avoided nor accepted. In this instance, risk mitigation is explored. Risk mitigation refers to the processes and methods of controlling risk.

The tool described below follows a client-server architecture. The client side is installed on individual computers in the system, whereas the server part is installed for the administrator of the system. Each client communicates with the server to get the parameter values provided by the administrator, and all of this information can be saved for future reference or for comparison purposes.

Third-party software must also be installed so that the client program can extract the services currently running on a particular computer. The client side then sends the scores for the individual components to the server, where all the scores are combined to provide a unified score for the whole network system.
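As a rough illustration of the aggregation step just described, the sketch below combines per-component scores reported by clients into a single score for the whole system. The weighting scheme and the function and variable names are hypothetical placeholders; the tool's actual combination formula is not given in this excerpt.

```python
# Sketch of the server-side aggregation: each client reports per-component
# scores, and the server combines them into one score for the whole network.
# The weights and names below are hypothetical placeholders.

from statistics import mean

def combine_client_scores(client_reports, weights):
    """client_reports: host -> {component name -> score};
    weights: component name -> relative importance (default 1.0)."""
    host_scores = []
    for host, components in client_reports.items():
        total_weight = sum(weights.get(c, 1.0) for c in components)
        weighted = sum(score * weights.get(c, 1.0) for c, score in components.items())
        host_scores.append(weighted / total_weight)
    return mean(host_scores)  # unified score for the whole system

reports = {
    "hostA": {"services": 3.2, "firewall": 1.5},
    "hostB": {"services": 5.0, "firewall": 2.1},
}
print(combine_client_scores(reports, {"services": 2.0, "firewall": 1.0}))
```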

We deployed our tool on two of the computer systems in The University of Texas at Dallas Database and Data Mining Laboratory to perform a comparative analysis of the risk for both machines. Since both machines are in the same network and under the same firewall rules, the AP and SPR values have been ignored, as they are identical for the two systems. AOL Active Security Monitor (ASM) provides a score based on seven factors, including firewall, virus protection, spyware protection, and P2P software.

Nessus, on the other hand, scans for open ports and checks how those ports can be used to compromise the system. Based on this, Nessus provides a report warning the user of the potential threats. The comparison is provided in Table 1. As can be seen from the table, System B is more vulnerable than System A, and the same trend can be seen using the other two tools as well.

We also performed comparisons between systems where both had the same services and software, but one was updated and the other was not. Since new vulnerabilities are found frequently, the risk our tool reports for the same system state changes over time. In such a case, however, both Nessus and the AOL Active Security Monitor would provide the same scores, as their criteria of security measurement remain unchanged. To the best of our knowledge, there is no existing tool that can perform such dynamic risk measurement, so we confine our comparison to these two tools.

Here the scores themselves may not provide much information about the risk, but they do allow a comparison over time of the state of security of the system, which enables effective monitoring of the risk to the system. The component values assigned by this equation are monotonically decreasing functions of the components of the Total Vulnerability Measure of the system. The parameters c1, c2, c3, c4, and c5 control how fast the components of the Quality of Protection Metric (QoPM) decrease with the risk factors.
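Since the equation itself is not reproduced in this excerpt, the following sketch shows one possible monotonically decreasing mapping (an exponential decay) from the Total Vulnerability Measure components to QoPM components, with c1 through c5 controlling the rate of decrease. The exact functional form and the 0-to-100 scale are assumptions.

```python
# Assumed illustration: map each Total Vulnerability Measure (TVM) component
# to a QoPM component via exponential decay. Larger c_i makes the i-th QoPM
# component fall faster as its risk factor grows. The exact equation and the
# 0-100 scale are assumptions, not the paper's formula.

import math

def qopm_components(tvm, c):
    """tvm: five risk-factor values; c: the parameters c1..c5."""
    assert len(tvm) == len(c) == 5
    return [100.0 * math.exp(-ci * ti) for ti, ci in zip(tvm, c)]

print(qopm_components([0.2, 1.5, 0.0, 3.1, 0.7],
                      [1.0, 0.5, 2.0, 0.3, 1.0]))
```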

The QoPM can be converted from a vector value to a scalar value by a suitable transformation, such as taking the norm or using weighted averaging. Intuitively, one way to combine these factors is to let the maximum risk factor dominate.
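A small sketch of the three combination options mentioned above: a norm, a weighted average, and letting the maximum risk factor dominate (i.e., taking the smallest protection component). These are generic formulations, not the paper's prescribed transformation.

```python
# Generic ways to collapse the QoPM vector into a scalar: a normalized norm,
# a weighted average, or a worst-case rule in which the maximum risk factor
# dominates (the smallest protection value wins). None of these is mandated
# by the excerpt; the choice is left to the user.

import math

def qopm_norm(q):
    return math.sqrt(sum(x * x for x in q) / len(q))   # RMS of the components

def qopm_weighted(q, w):
    return sum(x * wi for x, wi in zip(q, w)) / sum(w)

def qopm_worst_case(q):
    return min(q)   # dominated by the component with the least protection

q = [92.0, 74.5, 88.0, 61.3, 79.9]
print(qopm_norm(q), qopm_weighted(q, [1, 2, 1, 3, 1]), qopm_worst_case(q))
```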

Although we advocate generating a combined metric, we also believe that this combination framework should be customizable to accommodate user preferences. Another important aspect of this score is that, for the vulnerability measures, a higher value indicates higher risk.

In the case of the QoPM, a higher score indicates a higher level of security, or lower risk to the system. In contrast to other similar research works that present studies on a few specific systems and products, we experimented using publicly available vulnerability databases. We evaluated and tested our metric, both component-wise and as a whole, on a large number of services and randomly generated policies. In our evaluation process, we divided the data into training sets and test sets.

In the following sections, we describe our experiments and present their results. The NVD provides a rich array of information that makes it the vulnerability database of choice. For each vulnerability, the NVD provides the products and versions affected, descriptions, impacts, cross-references, solutions, loss types, vulnerability types, the severity class and score, etc.
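For illustration, a record holding the kinds of per-vulnerability fields listed above might look like the following. The field names are illustrative and do not mirror the NVD's actual feed schema.

```python
# Illustrative container for the per-vulnerability fields listed above.
# Field names are assumptions for readability, not the NVD's actual schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class VulnRecord:
    cve_id: str
    affected_products: list          # products and versions affected
    description: str
    published: date
    cross_references: list = field(default_factory=list)
    solutions: list = field(default_factory=list)
    loss_types: list = field(default_factory=list)
    vulnerability_types: list = field(default_factory=list)
    severity_class: str = ""         # e.g. low / medium / high
    severity_score: float = 0.0      # CVSS base score, 0-10
```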

The NVD severity score has a range of 0 to 10. We present some summary statistics about the NVD database snapshot that we used in our experiments in Table 2. The severity score is calculated using the Common Vulnerability Scoring System (CVSS), which provides a base score depending on several factors such as impact, access complexity, and the required authentication level. We varied the decay parameter b so that the decay function falls off at different rates. Here, we first chose services with at least 10 vulnerabilities in their lifetimes, then gradually increased this lower limit and observed that the accuracy increases with the lower limit.
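The decay idea can be sketched as follows: each past vulnerability contributes its severity score, discounted by a decay function of its age with rate parameter b. The exponential form below is an assumption consistent with the description; the paper's exact decay function is not reproduced in this excerpt.

```python
# Sketch of a decayed historical severity measure: each past vulnerability
# contributes its CVSS score, discounted by exp(-b * age). The exponential
# shape is an assumption; larger b discounts old vulnerabilities faster.

import math

def decayed_history_score(vulns, b):
    """vulns: list of (cvss_score, age_in_years) pairs; b: decay parameter."""
    return sum(score * math.exp(-b * age) for score, age in vulns)

history = [(7.5, 0.2), (9.0, 1.5), (4.3, 3.0)]
for b in (0.5, 1.0, 2.0):
    print(b, round(decayed_history_score(history, b), 2))
```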

As expected of a historical measure, better results were found when more history was available for the services; this trend is illustrated in the corresponding figure. First, we conducted experiments to evaluate the different ways of calculating the probability in (3). We compared the accuracies obtained by the exponential distribution, the empirical distribution, and the time series analysis method.

Here, we obtained the most accurate and stable results using the Exponential CDF. The data used in the Exponential CDF experiment were the interarrival times of the vulnerability exposures for the services in the database. We varied the lengths of the training and test data sets.

We only considered those services that have at least 10 distinct vulnerability release dates in the 48-month training period. For Expected Severity, we used a similar approach. For evaluating Expected Risk (ER), we combined the data sets for the probability calculation methods with the data sets for the expected severity. In the experiment for the Exponential CDF, we constructed an exponential distribution from the interarrival time data and computed (3) using the formula in (7).
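Since equations (3) and (7) are not shown in this excerpt, the following is a plausible sketch of the Exponential CDF step: fit an exponential distribution to the observed interarrival times, estimate the probability of at least one new vulnerability within a horizon T, and multiply by an expected severity to obtain an expected risk.

```python
# Assumed sketch of the Exponential CDF step: the MLE rate of an exponential
# distribution is 1 / mean interarrival time, the probability of at least one
# new vulnerability within horizon T is the CDF value 1 - exp(-rate * T), and
# the expected risk multiplies that probability by an expected severity.
# Equations (3) and (7) themselves are not shown in this excerpt.

from math import exp
from statistics import mean

def prob_new_vuln(interarrival_days, T):
    rate = 1.0 / mean(interarrival_days)   # fitted exponential rate
    return 1.0 - exp(-rate * T)            # Exponential CDF evaluated at T

def expected_risk(interarrival_days, expected_severity, T):
    return prob_new_vuln(interarrival_days, T) * expected_severity

gaps = [30.0, 45.0, 12.0, 60.0, 25.0]      # days between disclosures
print(prob_new_vuln(gaps, T=90))
print(expected_risk(gaps, expected_severity=6.8, T=90))
```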

For each training set, we varied the value of T and ran validation for each value of T against the test data set. The highest accuracy was observed for a test data set size of 12 months. The results of the Expected Severity experiment and of the Expected Risk experiment are presented in the corresponding figures.

For Expected Risk, the accuracy does not change much with the amount of training data, which implies that this method is not sensitive to the volume of training data available to it. The accuracy of the Expected Severity estimate, by contrast, increases quite sharply with decreasing training data set size. This means that the expectation calculated from the most recent data is actually the best model for the expected severity in the test data.

In the absence of other comparable measures of system security, we used the following hypothesis: if system A has a better QoPM than system B based on training-period data, then system A will have fewer vulnerabilities than system B in the test period.

We assume that the EVM component of the measure will be 0 as any existing vulnerability can be removed.
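One way to read the validation procedure is as a pairwise test: for every pair of systems, check whether the system with the better training-period QoPM really had fewer vulnerabilities in the test period, and report the fraction of pairs for which this holds. The sketch below implements that reading; the paper's exact accuracy computation may differ.

```python
# Pairwise check of the hypothesis: for every pair of systems, the one with
# the higher training-period QoPM should show fewer vulnerabilities in the
# test period. Ties are skipped. This is one reading of the accuracy measure,
# not necessarily the paper's exact procedure.

from itertools import combinations

def pairwise_accuracy(qopm, test_vuln_counts):
    agree = total = 0
    for a, b in combinations(qopm, 2):
        if qopm[a] == qopm[b] or test_vuln_counts[a] == test_vuln_counts[b]:
            continue                       # ties give no evidence either way
        total += 1
        better = a if qopm[a] > qopm[b] else b
        worse = b if better == a else a
        if test_vuln_counts[better] < test_vuln_counts[worse]:
            agree += 1
    return agree / total if total else 0.0

print(pairwise_accuracy({"A": 82.0, "B": 61.0, "C": 90.0},
                        {"A": 3, "B": 7, "C": 1}))
```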

In the experiment, we generated a set of random policies and evaluated each of them against this hypothesis. We varied the number of policies, starting at 50, and in generating the policies we varied the number of services per system from 2 to 20 in increments of 2. We present the results obtained by the experiment in the corresponding figure.

As mentioned previously, a policy can be regarded as a set of rules indicating which services are allowed access to the network traffic. We set up different service combinations and consider them as separate policies for our experiments. We can observe from the graph that the accuracy changes little with the number of policies. However, the accuracy does vary with the number of services per policy: it decreases as the number of services per policy increases.
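A policy generator consistent with this description can be sketched as follows: each policy is a random subset of services, with the number of services per policy drawn from 2 to 20 in steps of 2. The service catalogue below is a made-up placeholder; the actual services come from the vulnerability database.

```python
# Sketch of the random policy generation: each policy is a random subset of
# services allowed by the firewall, with the subset size drawn from 2 to 20
# in steps of 2. The service names are made-up placeholders.

import random

SERVICE_CATALOGUE = [f"service_{i}" for i in range(100)]   # placeholder names

def generate_policies(num_policies, seed=0):
    rng = random.Random(seed)
    policies = []
    for _ in range(num_policies):
        n_services = rng.choice(range(2, 21, 2))            # 2, 4, ..., 20
        policies.append(set(rng.sample(SERVICE_CATALOGUE, n_services)))
    return policies

print(len(generate_policies(50)), "random policies generated")
```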

This trend is more clearly illustrated in the corresponding figure. We also evaluated the running time of the algorithm. The simulation was run on a single machine; the running time was calculated for several hosts within each network, and the average running time per host was then used in the results. The highest running time per host, even for the largest network evaluated, was very reasonable at less than 5 seconds. Thus, the algorithm scales gracefully in practice and is feasible to run for a wide range of network sizes.

Keeping this in mind, many organizational standards have evolved to evaluate the security of an organization. Details regarding the methodology can be found in [10]. In [11], NIST provides guidance for measuring and strengthening security through the development and use of metrics, but the guideline is focused on individual organizations and does not provide any general scheme for quality evaluation of a policy.

There are some professional organizations as well as vulnerability assessment tools, including Nessus, NRAT, Retina, Bastille, and others [12]. These tools try to identify vulnerabilities from the configuration information of the network in question.

However, all of these approaches usually provide a report describing what should be done to keep the organization secure; they do not consider the vulnerability history of the deployed services or the policy structure. There has also been a great deal of research on security policy evaluation and verification. Attack graphs are another well-developed technique for assessing the risks associated with network exploits, but the implementations normally require intimate knowledge of the steps of the attacks to be analyzed for every host in the network [14, 15].

The authors of [16], however, provide a way to do so even when the information is incomplete. Still, this setup makes modeling and analysis with the model highly complex and costly. Mehta et al. take a related approach, but their work does not give any prediction of the future risks associated with the system, and it does not consider the policy resistance of the firewall and IDS.

There has been some research focusing on the attack surface of a network, including the work of Mandhata et al., Howard et al., and Atzeni et al. In [22], Pamula et al. propose a security metric based on the weakest adversary. The work of Alhazmi et al. focuses on a few specific systems and products; our work is more general in this respect and utilizes publicly available data. There has also been research focusing on hardening the network, such as that of Wang et al.

They also attempt to predict future alerts in multistep attacks using attack graphs [25]. Earlier work on hardening the network was done by Noel et al., who use the graphs to find initial conditions that, when disabled, achieve the purpose of hardening the network. Sahinoglu et al. likewise propose a quantitative measure of existing risk. However, none of these works presents the total picture, as they predominantly try to find existing risk and do not address how risky the system will be in the near future or how the policy structure would impact security.

Their analysis regarding security policies cannot be regarded as complete, and they lack flexibility in evaluating them. A preliminary investigation of measuring the existing vulnerability, along with some historical trends, was presented in a previous work [29], but that work was limited in analysis and scope.

In this paper, we present a proactive approach to quantitatively evaluating the security of network systems by identifying, formulating, and validating several important factors that greatly affect it. Our experiments validate our hypothesis that if a service has a highly vulnerability-prone history, then there is a higher probability that the service will become vulnerable again in the near future.

These metrics also indicate how the internal firewall policies affect the security of the network as a whole. Our experiments provide very promising results regarding our metric: the accuracies obtained support our claims about the components of the metric as well as the metric as a whole.

Combining all the measures into a single metric and performing experiments based on this single metric also provides us with an idea of its effectiveness.

A little forethought and work enable more options than just a major product recall or bankruptcy filing. Engineers and managers throughout the organization make decisions concerning risks every day.

Providing a set of clear strategies along with guidance allows the entire organization to appropriately mitigate risks on a daily basis. In the meantime, please check out the FMEA resources page for more about one of the basic tools to identify product, process, or system risks.

Comments

Please send tutorials for risk mitigation.

May I know the published date of this article so that I can use it as a reference in my assignment? Thanks in advance.

Anyway, thanks for the request.

From your experience, which one of the four risk mitigation strategies would you consider the best?

Hi Israel, I would always look for a way to avoid the risk when possible: Avoidance. Design out the possibility of the risk occurring or imposing unwanted consequences. A risk avoided is a risk conquered. Cheers, Fred.

Are there some more risk mitigation strategies in addition to the four strategies above?

Most likely there are; these are the ones I am familiar with. If anyone knows of other strategies, please chime in.

Other mitigation strategies are hedge, risk buffer, share, find alternatives, DD, etc. These are other risks that I am aware of. If you find content for any of them, please email me the examples; I would be highly obliged. Dinesh.

The content is very useful and easy to understand on risk mitigation. Thanks for the content.
