A Java implementation of CSRF mitigation using “double submit cookie” pattern

Goal of this article

The goal of this article is to present an implementation of the “double submit cookie” pattern used to mitigate Cross-Site Request Forgery (CSRF) attacks. The proposed implementation is a Java filter plus a few auxiliary classes and it is (obviously) suitable for projects using Java as the back-end technology.

Definition of CSRF and possible mitigations

In the case of a CSRF attack, the browser is tricked into making unauthorized requests on the victim’s behalf, without the victim’s knowledge. The general attack scenario contains the following steps:

  1. the victim connects to the vulnerable web-site, so it has a real, authenticated session.
  2. the attacker forces the victim (usually using a spam/phishing email) to navigate to another (evil) web-site containing the CSRF attack.
  3. when the victim’s browser executes the (evil) web-site page, the browser will execute a (fraudulent) request to the vulnerable web-site using the user’s authenticated session. The user is not aware at all that navigating on the (evil) web-site will trigger an action on the vulnerable web-site.

For deeper explanations I strongly recommend reading chapter 5 of the Iron-Clad Java: Building Secure Applications book and/or the OWASP Cross-Site Request Forgery (CSRF) Prevention Cheat Sheet.

Definition of “double submit cookie” pattern

When a user authenticates to a site, the site should generate a (cryptographically strong) pseudo-random value and set it as a cookie on the user’s machine, separate from the session id. The server does not have to save this value in any way; this is why the pattern is sometimes also called the Stateless CSRF Defense.

The site then requires that every transaction request includes this random value as a hidden form value (or other request parameter/header). A cross-origin attacker cannot read any data sent from the server or modify cookie values, per the same-origin policy, so the attacker cannot put the correct token into a forged request.

In the case of this mitigation technique, the job of the client is very simple: just retrieve the CSRF cookie from the response and add it into a special header on all the requests:

[Figure: Client workflow]
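
For illustration, here is a minimal sketch of a non-browser client written in Java, assuming the AngularJS default names (XSRF-TOKEN cookie, X-XSRF-TOKEN header) and placeholder URLs; a real browser client would do the same thing in JavaScript:

    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.util.List;

    public class CsrfAwareClient {

        public static void main(String[] args) throws Exception {
            // 1. initial GET request: the server sets the CSRF cookie
            HttpURLConnection get = (HttpURLConnection)
                    new URL("http://localhost:8080/app/home").openConnection();
            String csrfToken = null;
            List<String> setCookies = get.getHeaderFields().get("Set-Cookie");
            if (setCookies != null) {
                for (String cookie : setCookies) {
                    if (cookie.startsWith("XSRF-TOKEN=")) {
                        csrfToken = cookie.split(";", 2)[0].substring("XSRF-TOKEN=".length());
                    }
                }
            }

            // 2. state-changing POST request: send the cookie back AND echo it in the header
            HttpURLConnection post = (HttpURLConnection)
                    new URL("http://localhost:8080/app/transfer").openConnection();
            post.setRequestMethod("POST");
            post.setRequestProperty("Cookie", "XSRF-TOKEN=" + csrfToken);
            post.setRequestProperty("X-XSRF-TOKEN", csrfToken);
            System.out.println("Response code: " + post.getResponseCode());
        }
    }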

The job of the server is a little more complex: create the CSRF cookie and, for each request asking for a protected resource, check that the CSRF cookie and the CSRF header of the request match:

[Figure: Server workflow]
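
To make the workflow concrete, here is a minimal, naive sketch of the server-side check (hard-coded names, no extension points); the GenericCSRFStatelessFilter presented below is a more complete and configurable version of this idea:

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.Cookie;
    import javax.servlet.http.HttpServletRequest;

    public class NaiveDoubleSubmitFilter implements Filter {

        private static final String COOKIE_NAME = "XSRF-TOKEN";
        private static final String HEADER_NAME = "X-XSRF-TOKEN";

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletRequest request = (HttpServletRequest) req;

            // only non-GET requests are protected
            if (!"GET".equals(request.getMethod())) {
                String cookieToken = null;
                if (request.getCookies() != null) {
                    for (Cookie cookie : request.getCookies()) {
                        if (COOKIE_NAME.equals(cookie.getName())) {
                            cookieToken = cookie.getValue();
                        }
                    }
                }
                String headerToken = request.getHeader(HEADER_NAME);
                if (cookieToken == null || !cookieToken.equals(headerToken)) {
                    throw new ServletException("CSRF cookie/header mismatch");
                }
            }
            chain.doFilter(req, res);
        }

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void destroy() {
        }
    }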

Note that some JavaScript frameworks like AngularJS implement the client workflow out of the box; see Cross Site Request Forgery (XSRF) Protection.

Java implementation of “double submit cookie” pattern

The proposed implementation is in the form of a (Java) servlet filter and can be found here: GenericCSRFFilter GitHub.

In order to use the filter, you must define it in your web.xml file:

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
</filter>

<filter-mapping>
   <filter-name>CSRFFilter</filter-name>
   <url-pattern>/*</url-pattern>
</filter-mapping>

 

The filter can have two optional initialization parameters: csrfCookieName, representing the name of the cookie that will store the CSRF token, and csrfHeaderName, representing the name of the HTTP header that will also contain the CSRF token.

The default values for these parameters are “XSRF-TOKEN” for the csrfCookieName and “X-XSRF-TOKEN” for the csrfHeaderName, both of them being the default values that AngularJS expects in order to implement its CSRF protection.
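
For example, if you want to override the defaults, the parameters can be set in the web.xml filter declaration (the values below are just placeholders):

<filter>
   <filter-name>CSRFFilter</filter-name>
   <filter-class>com.github.adriancitu.csrf.GenericCSRFStatelessFilter</filter-class>
   <init-param>
      <param-name>csrfCookieName</param-name>
      <param-value>MY-CSRF-COOKIE</param-value>
   </init-param>
   <init-param>
      <param-name>csrfHeaderName</param-name>
      <param-value>MY-CSRF-HEADER</param-value>
   </init-param>
</filter>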

By default the filter has the following features:

  • works with AngularJS.
  • the CSRF token will be a random UUID.
  • all the resources that are NOT accessed through a GET request method will be CSRF protected.
  • the CSRF cookie is replaced after each non-GET request.

How it’s working under the hood

Most of the functionality is in the GenericCSRFStatelessFilter#doFilter method; here is the sequence diagram that explains what happens in this method:

[Figure: doFilter method sequence diagram]

The doFilter method is executed on each HTTP request:

  1. The filter creates an instance of the ExecutionContext class; this class is a simple POJO containing the initial HTTP request, the HTTP response, the CSRF cookies (if more than one cookie with the csrfCookieName is present) and the implementations of the ResourceCheckerHook, TokenBuilderHook and ResponseBuilderHook (see the next section for the meaning of these classes).
  2. The filter checks the status of the HTTP resource; that status can be MUST_NOT_BE_PROTECTED, MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED or MUST_BE_PROTECTED_AND_COOKIE_ATTACHED (see the ResourceStatus enum), using an instance of ResourceCheckerHook.
  3. If the resource status is ResourceStatus#MUST_NOT_BE_PROTECTED or
    ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED, then
    the filter creates a CSRF cookie having as token the token generated by an instance of TokenBuilderHook.
  4. If the resource status is ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED,
    then the filter computes the CSRFStatus of the resource and uses an instance of ResponseBuilderHook to return the response to the client.

How to extend the default behavior

It is possible to extend or overwrite the default behavior by implementing the hook interfaces. All the hook implementations must be thread-safe.

  1. The ResourceCheckerHook is used to check the status of a requested resource. The default implementation is DefaultResourceCheckerHookImpl and it will return ResourceStatus#MUST_NOT_BE_PROTECTED for any HTTP GET method; for all the other request types, it will return ResourceStatus#MUST_BE_PROTECTED_AND_COOKIE_ATTACHED if any CSRF cookie is present in the request or ResourceStatus#MUST_BE_PROTECTED_BUT_NO_COOKIE_ATTACHED otherwise. The interface signature is the following one:
    public interface ResourceCheckerHook extends Closeable {
        ResourceStatus checkResourceStatus(ExecutionContext executionContext);
    }  
  2. The TokenBuilderHook hook is used to generate the token that will be used to create the CSRF cookie. The default implementation is DefaultTokenBuilderHookImpl and it uses a call to UUID.randomUUID to generate a token. The interface signature is the following one:
    public interface TokenBuilderHook extends Closeable {
        String buildToken(ExecutionContext executionContext);
    }
  3. The ResponseBuilderHook is used to generate the response to the client depending on the CSRFStatus of the resource. The default implementation is DefaultResponseBuilderHookImpl and it throws a SecurityException if the CSRF status is CSRFStatus#COOKIE_NOT_PRESENT, CSRFStatus#HEADER_TOKEN_NOT_PRESENT or CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MISMATCH. If the CSRF status is CSRFStatus#COOKIE_TOKEN_AND_HEADER_TOKEN_MATCH then the old CSRF cookies are deleted and a new CSRF cookie is created. The interface signature is the following one:
    public interface ResponseBuilderHook extends Closeable {
        ServletResponse buildResponse(ExecutionContext executionContext,
                                      CSRFStatus status);
    }
    

The hooks are instantiated inside the GenericCSRFStatelessFilter#init method using the Java 6 ServiceLoader facility. So if you want to use your own implementation of one of the hooks, you have to create a META-INF/services directory containing a text file whose name is the fully-qualified name of the hook interface that you want to replace and whose content is the fully-qualified name of your implementation class.
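
For example, to replace the default token builder with one based on SecureRandom, a (hypothetical) implementation could look like this, assuming the hook interfaces live in the same com.github.adriancitu.csrf package as the filter:

    import java.io.IOException;
    import java.math.BigInteger;
    import java.security.SecureRandom;

    import com.github.adriancitu.csrf.ExecutionContext;
    import com.github.adriancitu.csrf.TokenBuilderHook;

    //hypothetical custom hook that generates a 256-bit random token instead of a UUID
    public class SecureRandomTokenBuilderHookImpl implements TokenBuilderHook {

        private final SecureRandom secureRandom = new SecureRandom();

        @Override
        public String buildToken(ExecutionContext executionContext) {
            return new BigInteger(256, secureRandom).toString(32);
        }

        @Override
        public void close() throws IOException {
            //nothing to release
        }
    }

It would then be registered by creating the file META-INF/services/com.github.adriancitu.csrf.TokenBuilderHook containing a single line with the fully-qualified name of the implementation class (for example com.example.csrf.SecureRandomTokenBuilderHookImpl).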

Here is the sequence diagram representing the hooks initialization:

[Figure: init method sequence diagram]

ElasticSearch: How to enable scripting on the embedded server

The goal

In this ticket I will present all the required steps in order to enable scripting on an embedded ElasticSearch server. I will not explain how to create and use an embedded server, but if you need more information you can read the ticket Embedded Elasticsearch Server for Tests.

If you execute a script within the embedded server, the following exception will be thrown:

Caused by: java.lang.IllegalArgumentException: script_lang not supported [xxxxx]
    at  org.elasticsearch.script.ScriptService.getScriptEngineServiceForLang(ScriptService.java:211)

Code changes

 The easiest way to create an embedded server is the following one:

 Node node = NodeBuilder.nodeBuilder()
              .local(true)
              .settings(
                  //add settings here
              )
              .node();

The problem with this approach is that the NodeBuilder.node() method will create a Node instance with an empty list of plug-ins; see the NodeBuilder.build() and Node(Settings) constructor code:

    /**
     * Builds the node without starting it.
     */
    public Node build() {
        return new Node(settings.build());
    }

    //Node(Settings) constructor
    /**
     * Constructs a node with the given settings.
     *
     * @param preparedSettings Base settings to configure the node with
     */
    public Node(Settings preparedSettings) {
        this(InternalSettingsPreparer
                .prepareEnvironment(preparedSettings, null),
             Version.CURRENT,
             Collections.<Class<? extends Plugin>>emptyList());
    }

In order to create a node having a list of desired plug-ins, the second constructor of the Node class should be used:

protected Node(
                Environment tmpEnv, 
                Version version, 
                Collection<Class<? extends Plugin>> classpathPlugins)

The problem with this (second) constructor is that it is protected, so we need to extend the Node class to be able to use the protected constructor, and we also need to extend the NodeBuilder class in order to use the new Node class.

And here is the code:

  //PluginNode class 
  private static class PluginNode extends Node {
      public PluginNode(Settings preparedSettings, List<Class<? extends Plugin>> plugins) {
         super(InternalSettingsPreparer.prepareEnvironment(preparedSettings, null), Version.CURRENT, plugins);
      }
  }

  //PluginNodeBuilder class
  public class PluginNodeBuilder extends NodeBuilder {
        private List<Class<? extends Plugin>> plugins = new ArrayList<>();
        
        public PluginNodeBuilder() {
            super();
        }
        //new method to add plug-ins       
        public PluginNodeBuilder addPlugin(Class<? extends Plugin> plugin) {
            plugins.add(plugin);
            return this;
        }

        @Override
        //use the new PluginNode to build a node 
        public Node build() {
            return new PluginNode(settings().build(), plugins);
        }
    }

And how to use the new node builder:

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(
                    //add settings here
                )
                .node();

Enable scripting feature

The previous Java code alone is not sufficient to execute scripts; you must also use the right settings when the server is created and add the (Maven) dependencies to your classpath.

Server settings

 Node node = new PluginNodeBuilder()
                .addPlugin(ExpressionPlugin.class)
                .addPlugin(GroovyPlugin.class)
                .local(true)
                .settings(Settings.settingsBuilder()
                     // ... add other settings here
                     //this is common for all languages
                     .put("script.inline", true)
                     //these are the settings for the groovy language
                     .put("script.engine.groovy.inline.mapping", true)
                     .put("script.engine.groovy.inline.search", true)
                     .put("script.engine.groovy.inline.update", true)
                     .put("script.engine.groovy.inline.plugin", true)
                     .put("script.engine.groovy.inline.aggs", true)

                     //these are the settings for the expression language
                     .put("script.engine.expression.inline.mapping", true)
                     .put("script.engine.expression.inline.search", true)
                     .put("script.engine.expression.inline.update", true)
                     .put("script.engine.expression.inline.plugin", true)
                     .put("script.engine.expression.inline.aggs", true)
                )
                .node();

If you need more details about all the possible settings options, you can look at the ElasticSearch scripting documentation.

Maven dependencies

For the expression language:

       <dependency>
            <groupId>org.apache.lucene</groupId>
            <artifactId>lucene-expressions</artifactId>
            <version>5.5.2</version>
            <scope>compile</scope>
        </dependency>

       <dependency>
            <groupId>org.elasticsearch.module</groupId>
            <artifactId>lang-expression</artifactId>
            <version>2.2.0</version>
            <scope>compile</scope>
        </dependency>

For the groovy language:

<dependency>
    <groupId>org.elasticsearch.module</groupId>
    <artifactId>lang-groovy</artifactId>
    <version>2.3.2</version>
</dependency>
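
Once the plug-ins are registered, the settings are in place and the dependencies are on the classpath, an inline Groovy script can be executed against the embedded node. A sketch (the index, type, document id and field name are placeholders and are assumed to already exist):

    import org.elasticsearch.client.Client;
    import org.elasticsearch.script.Script;
    import org.elasticsearch.script.ScriptService.ScriptType;

    Client client = node.client();
    //increment a numeric field using an inline Groovy script
    client.prepareUpdate("myindex", "mytype", "1")
          .setScript(new Script("ctx._source.counter += 1", ScriptType.INLINE, "groovy", null))
          .get();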

Book review: Iron-Clad Java: Building Secure Web Applications

This is a review of the Iron-Clad Java: Building Secure Web Applications book.

(My) Conclusion

I will start with the conclusion because it’s maybe the most important part of this review.

For me this is a must-read book if you want to write more robust (web and non-web) applications in Java; it covers a very large range of topics, from the basics of securing a web application using HTTP/S response headers to handling the encryption of sensitive information in the right way.

Chapter 1: Web Application Security Basics

This chapter is an introduction to the security of web applications and it can be split into two different types of items.

The first type of items is what I would call “low-hanging fruits”, or how you could improve the security of your application with very small effort:

  • The use of the HTTP/S POST request method is advised over the use of HTTP/S GET. In the case of POST the parameters are embedded in the request body, so the parameters are not stored and visible in the browser history and, if used over HTTPS, are opaque to a possible attacker.
  • The use of the HTTP/S response headers (a minimal filter sketch follows this list):
    • Cache-Control – directive that instructs the browser how it should cache the data.
    • Strict-Transport-Security – response header that forces the browser to always use the HTTPS protocol. This can protect against protocol downgrade attacks and cookie hijacking.
    • X-Frame-Options – response header that can be used to indicate whether or not a browser should be allowed to render a page in a <frame>, <iframe> or <object>. Sites can use this to avoid clickjacking attacks, by ensuring that their content is not embedded into other sites.
    • X-XSS-Protection – response header that can help stop some XSS attacks (this is implemented and recognized only by Microsoft IE products).
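
A minimal sketch of how these headers could be set from a servlet filter (the header values shown are common choices, not the only valid ones):

    import java.io.IOException;
    import javax.servlet.Filter;
    import javax.servlet.FilterChain;
    import javax.servlet.FilterConfig;
    import javax.servlet.ServletException;
    import javax.servlet.ServletRequest;
    import javax.servlet.ServletResponse;
    import javax.servlet.http.HttpServletResponse;

    public class SecurityHeadersFilter implements Filter {

        @Override
        public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
                throws IOException, ServletException {
            HttpServletResponse response = (HttpServletResponse) res;
            //do not cache potentially sensitive responses
            response.setHeader("Cache-Control", "no-store");
            //force HTTPS for one year, including sub-domains
            response.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
            //refuse to be framed by any site
            response.setHeader("X-Frame-Options", "DENY");
            //enable the IE XSS filter in blocking mode
            response.setHeader("X-XSS-Protection", "1; mode=block");
            chain.doFilter(req, res);
        }

        @Override
        public void init(FilterConfig filterConfig) {
        }

        @Override
        public void destroy() {
        }
    }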

The second type of items covers more complex topics like input validation and security controls. For these items the authors just scratch the surface, because all of them will be treated in more detail in the following chapters of the book.

Chapter 2: Authentication and Session Management

This chapter is about how a secure authentication feature should work; under the authentication topic are included the login process, session management, password storage and identity federation.

The first part presents the general workflow of login and session management (see next picture) and for every step of the workflow some dos and don’ts are described.

[Figure: login and session management workflow]

The second part of the chapter is about common attacks on authentication, and for each kind of attack a solution to mitigate it is also presented. This part of the chapter is strongly inspired by the OWASP Session Management Cheat Sheet, which is rather normal because one of the authors (Jim Manico) is the project manager of the OWASP Cheat Sheet Series.

If you want to have a quick view of this chapter you can take a look at the presentation Authentication and Session Management done by Jim.

Even if you are not implementing an authentication framework for your application, you can still find good advice that can be applied to other web applications, like the use of the secure and http-only attributes for cookies and the increase of the session ID length.
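
For example, with the Servlet 3.0 API the cookie attributes can be set like this (the cookie name and value are placeholders, and "response" is assumed to be the HttpServletResponse of the current request):

    import javax.servlet.http.Cookie;

    Cookie authCookie = new Cookie("AUTH-TOKEN", tokenValue);
    authCookie.setSecure(true);    //sent only over HTTPS
    authCookie.setHttpOnly(true);  //not readable from JavaScript
    response.addCookie(authCookie);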

Chapter 3: Access Control

The chapter is about the advantages and pitfalls of implementing an authorization framework and can be split into three parts.

The first part describes the goal of an authorization framework and defines some core terms:

  • subject: the service making the request.
  • subject attributes: the attributes that define the service making the request.
  • group: basic organizational structure.
  • role: a functional abstraction that uniquely describes system collaborators with similar or unique duties.
  • object: the data being operated on.
  • object attributes: the attributes that define the type of object being operated on.
  • access control rules: the decisions that need to be made to determine if a subject is allowed to access an object.
  • policy enforcement point: the place in code where the access control check is made.
  • policy decision point: the engine that takes the subject, subject attributes, object, object attributes and evaluates them to make an access control decision.
  • policy administration point: the administrative entry point of the access control system.

The second part of the chapter describes some access control (positive) patterns and anti-patterns.

Some of the (positive) access control patterns: have a centralized policy enforcement point and policy decision point (not spread through the entire code), take all the authorization decisions on the server side only (never trust the client), and make it possible to change the access control rules dynamically (it should not be necessary to recompile or restart/redeploy the application).

For the anti-patterns, some of them are just the opposite of the (positive) patterns: hard-coded policy (the opposite of “changes in the access control rules should be done dynamically”) and adding the access control manually to every endpoint (the opposite of having a centralized policy enforcement point and policy decision point).

Other anti-patterns are around the idea of never trusting the client: using request data to make access control policy decisions and failing open (the access control framework should cope with wrong or missing parameters coming from the client).

The third part of the chapter is about different approaches (actually two) to implement an access control framework. The most used approach is RBAC (Role Based Access Control) and it is implemented by some well-known Java access control frameworks like Apache Shiro and Spring Security. The most important limitation of RBAC is the difficulty of implementing data-specific/contextual access control. The use of the ABAC (Attribute Based Access Control) paradigm can solve the data-specific/contextual access control problem, but there are no mature frameworks on the market implementing it.

Chapter 4: Cross-Site Scripting Defense

This chapter is about the most common vulnerability found across the web and it has two parts: the presentation of the different types of cross-site scripting (XSS) and the ways to defend against them.

XSS is a type of attack that consists of injecting untrusted data into the victim’s (web) browser. There are three types of XSS:

  • reflected XSS (non-persistent) – the attacker tampers with the HTTP request to submit malicious JavaScript code. Reflected attacks are delivered to victims via e-mail messages or on some other web site. When a user clicks on a malicious link or submits a specially crafted form, the injected code travels to the vulnerable web site, which reflects the attack back to the user’s browser. The browser then executes the code because it came from a “trusted” server.
  • stored XSS (persistent XSS) – the malicious script is stored on the server hosting the vulnerable web application (usually in the database) and it is served later to other users of the web application when they load the vulnerable page. In this case the victim is not required to take any attacker-initiated action.
  • DOM-based XSS – the attack payload is executed as a result of modifying the DOM “environment” in the victim’s browser.

For the defense techniques, the big picture is that input validation and output encoding should fix (almost) all the problems, but very often various factors need to be considered when deciding on the defense technique.

Some projects are presented for output encoding (OWASP Java Encoder Project) and for sanitizing untrusted HTML (OWASP HTML Sanitizer, OWASP AntiSamy).

Chapter 5: Cross-Site Request Forgery Defense and Clickjacking

Chapter 6: Protecting Sensitive Data

This chapter is articulated around three topics: how to protect (sensitive) data in transit, how to protect (sensitive) data at rest and the generation of secure random numbers.

How to protect the data in transit

The standard way to protect data in transit is the use of the Transport Layer Security (TLS) cryptographic protocol. In the case of web applications, all the low-level details are handled by the web server/application server and by the client browser, but if you need a secure communication channel programmatically you can use the Java Secure Socket Extension (JSSE). The authors’ recommendation for the cipher suites is to use the JSSE defaults.

Another topic treated by the authors is certificate and key management in Java. The notions of truststore and keystore are very well explained and examples of how to use the keytool tool are provided. Last but not least, examples of how to manipulate truststores and keystores programmatically are also provided.
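
As an illustration, a keystore created with keytool (for example keytool -genkeypair -alias server -keyalg RSA -keystore keystore.jks) can be loaded programmatically like this; the path and password are placeholders:

    import java.io.FileInputStream;
    import java.security.KeyStore;

    KeyStore keyStore = KeyStore.getInstance("JKS");
    try (FileInputStream in = new FileInputStream("/path/to/keystore.jks")) {
        //the password protects the integrity of the keystore
        keyStore.load(in, "changeit".toCharArray());
    }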

How to protect data at rest

The goal is to securely store the data in a reversible way: the data must be wrapped in protection when it is stored and the protection must be unwrapped later when the data is used.

For this kind of scenario, the authors focus on Keyczar, which is an (open source) framework created by the Google Security Team whose goal is to make it easier and safer for developers to use cryptography. The developers should not be able to inadvertently expose key material, use weak key lengths or deprecated algorithms, or improperly use cryptographic modes.

Examples are provided about how to use Keyczar for encryption (symmetric and asymmetric) and for signing purposes.

 Generation of secure random numbers

The last topic of the chapter is about the Java support for the generation of secure random numbers, like the optimal way of using java.security.SecureRandom (knowing that the implementation depends on the underlying platform) and the new cryptographic features of Java 8 (enhanced certificate revocation checking, TLS Server Name Indication extension).
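
A short sketch of the usual SecureRandom usage (Java 8 also offers SecureRandom.getInstanceStrong(), which may block on some platforms):

    import java.security.SecureRandom;

    SecureRandom random = new SecureRandom();
    byte[] randomBytes = new byte[32];
    //fills the array with 32 cryptographically strong random bytes
    random.nextBytes(randomBytes);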

Chapter 7: SQL Injection and other injection attacks

This chapter is dedicated to injection attacks; SQL injection is treated in more detail than the other types of injection.

SQL injection

The SQL injection mechanism and the usual defenses are very well explained. What is interesting is that the authors propose solutions to limit the impact of SQL injection when the “classical” solution of query parametrization cannot be applied (in the case of legacy applications, for example): the use of input validation, the use of database permissions and the verification of the number of results.
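
For reference, the “classical” query parametrization defense looks like this in plain JDBC (the table and column names are placeholders):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public static ResultSet findUserByName(Connection connection, String userName)
            throws SQLException {
        //the user input is bound as a parameter, never concatenated into the SQL string
        PreparedStatement statement =
                connection.prepareStatement("SELECT id, name FROM users WHERE name = ?");
        statement.setString(1, userName);
        return statement.executeQuery();
    }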

Other types of injections

XML injection, JSON-based injection and command injection are very briefly presented and the main takeaways are the following ones:

  • use a safe parser (like JSON.parse) when parsing untrusted JSON.
  • when receiving untrusted XML, an XML schema should be applied to ensure proper XML structure (see the sketch after this list).
  • when an XML query language (XPath) is intermixed with untrusted data, query parametrization or encoding is necessary.
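
A sketch of validating untrusted XML against a schema with the standard javax.xml.validation API (the file names are placeholders):

    import java.io.File;
    import javax.xml.XMLConstants;
    import javax.xml.transform.stream.StreamSource;
    import javax.xml.validation.Schema;
    import javax.xml.validation.SchemaFactory;
    import javax.xml.validation.Validator;

    SchemaFactory factory = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
    Schema schema = factory.newSchema(new File("document.xsd"));
    Validator validator = schema.newValidator();
    //throws SAXException if the untrusted document does not match the expected structure
    validator.validate(new StreamSource(new File("untrusted.xml")));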

Chapter 8: Safe File Upload and File I/O

The chapter speaks about how to safely handle files coming from external sources and how to protect against attacks like file path injection, null byte injection and quota overload DoS.

The main takeaways are the following ones: validate the filenames (reject filenames containing dangerous characters like “/” or “\”), set a per-user upload quota, save the files to a non-accessible directory, and create a filename reference map that links the actual file name to a machine-generated name and use this generated file name (see the sketch below).
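
A small sketch of the filename reference map idea (the upload directory and the validation rules are simplified placeholders):

    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    public class UploadNameMapper {

        //maps the original file name to the machine-generated one
        private final Map<String, String> originalToGenerated = new ConcurrentHashMap<>();
        //directory that is not accessible from the web
        private final Path uploadDirectory = Paths.get("/var/app-uploads");

        public Path register(String originalFileName) {
            if (originalFileName.contains("/") || originalFileName.contains("\\")
                    || originalFileName.contains("\0")) {
                throw new IllegalArgumentException("Dangerous characters in file name");
            }
            String generatedName = UUID.randomUUID().toString();
            originalToGenerated.put(originalFileName, generatedName);
            return uploadDirectory.resolve(generatedName);
        }
    }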

Chapter 9: Logging, Error Handling and Intrusion Detection

Logging

What should be logged: what happened, who did it, when it happened and what data has been modified; and what should not be logged: sensitive information like session IDs and personal information.

Some logging frameworks usable for security are presented, like OWASP ESAPI Logging and Logback. If you are interested in more details about security logging you can check the OWASP Logging Cheat Sheet.

Error Handling

On the error handling side, the main idea is not to leak stack traces to the external world, because they could give valuable information about your application/infrastructure to an attacker. It is possible to prevent this by registering, at the application level, static pages for each type of error code or exception type (see the web.xml sketch below).
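
In a servlet container this registration is typically done in web.xml; a sketch (the error page location is a placeholder):

<error-page>
   <error-code>500</error-code>
   <location>/error.html</location>
</error-page>
<error-page>
   <exception-type>java.lang.Throwable</exception-type>
   <location>/error.html</location>
</error-page>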

Intrusion Detection

The last part of the chapter is about techniques to help monitor, detect and defend against different types of attacks. Besides the “craft yourself” solutions, the authors also present the OWASP AppSensor application.

Chapter 10: Secure Software Development Lifecycle

The last chapter is about the SSDLC (Secure Software Development Life Cycle) and how security could be included in each step of the development cycle. For me this chapter is not the best one, but if you are interested in this topic I highly recommend the Software Security: Building Security In book (you can read my own review of the book here, here and here).

Book review: Continuous Enterprise Development in Java

This is a very short review of the Continuous Enterprise Development in Java book.

The book can be easily split in two parts.

The first part of the book, from chapter 1 to chapter 4, contains general information about the difficulty of testing JEE applications, about the software development cycles, the types of testing and some more technical details about the testing frameworks (JUnit and TestNG), about build tools like Maven and (JBoss) Forge, about version control (only Git deserves a paragraph) and (finally) about Arquillian, which is presented as “an innovative and highly extensible testing platform for the JVM”.

A very nice introduction is given to ShrinkWrap, which is an API to programmatically create deployable JEE archives (jars, wars, ears).
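
For illustration, a typical ShrinkWrap usage looks like this (the class added to the archive is a placeholder):

    import org.jboss.shrinkwrap.api.ShrinkWrap;
    import org.jboss.shrinkwrap.api.asset.EmptyAsset;
    import org.jboss.shrinkwrap.api.spec.WebArchive;

    //builds a deployable war in memory
    WebArchive archive = ShrinkWrap.create(WebArchive.class, "test.war")
            .addClass(MyService.class)
            .addAsWebInfResource(EmptyAsset.INSTANCE, "beans.xml");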

An entire chapter (Chapter 3) is dedicated to writing and deploying some business code and the associated Arquillian tests. Almost all the tools used in this chapter are JBoss or Red Hat tools: Forge for building the application, JBoss Application Server to deploy the application and JBoss Developer Studio to deploy on the (Red Hat) OpenShift cloud service.

The second part of the book, from chapter 4 to chapter 12, contains the implementation of the http://geekseek.continuousdev.org/app/root/show application, which is a JEE application. Every chapter treats one aspect of the application: chapter 5 the persistence layer, chapter 6 the integration with NoSQL databases, chapter 7 the services layer, chapter 8 the REST services, chapter 9 the security, chapter 10 the user interface and chapter 11 the deployment to the live environment.

Every chapter follows the same pattern: it starts with an introduction to the technology that will be used within the chapter, then the use cases and the business requirements are presented, followed by the implementation of the requirements and, lastly, the testing of the implementation using Arquillian.

I will conclude my ticket with a few points about what I like and what I don’t like about this book.

What I like about this book:

  • The author clearly masters the different JEE components; the technology introduction paragraphs of each chapter of the second part of the book are very clear and easy to understand.
  • The author knows the Arquillian framework inside out; all the examples are well explained and the introduction to ShrinkWrap is very well done.
  • Some of the chapters contain very valuable external references, like PicketLink for the security or the Richardson Maturity Model for REST.

What I do not like about this book:

  • Too much marketing of the RedHat/JBoss products; I would have preferred more vendor-agnostic examples of using the Arquillian framework.
  • The subtitle of the book is “Testable Solutions with Arquillian”, so it is supposed to focus more on the testing part of the applications. Unfortunately, for me, the book does not focus on testing the applications but rather tries to present how to continuously develop (JEE) applications, and integration testing is only one part of this.
  • Nothing is said about the integration of Arquillian with other Java (non-JEE) projects/frameworks like Spring and Guice and how Arquillian could ease (if it can) the testing of applications using these frameworks.

AppDynamics Pro – basics

The goal of this ticket is to present and explain the basic notions of the AppDynamics Pro product.

  • Node – a node is the basic unit of processing that AppDynamics monitors. A node is instrumented by an AppDynamics agent.
  • Tier – a tier represents an instrumented service or multiple services that perform the exact same functionality. It represents a more logical view of the application. A tier is composed of one or multiple nodes.
  • Application – multiple tiers gathered together.
  • Business Transaction – represents a distinct logical user activity. The entire application traffic is organized in Business Transactions.
  • Transaction Snapshot – set of diagnostic data, taken at a certain point in time, for a specific Business Transaction across all the tiers through which the transaction has passed. Transaction Snapshots are triggered periodically (every 10 minutes) or automatically for slow and erroneous business transactions.
  • Metrics – application performance information sent from the App Server Agents and Machine Agents to the controller.
  • Baselines – set of metrics within a time range.
  • Baseline Deviations – the degree of deviation from the baseline at any given point in time; by default they are calculated as a number of standard deviations above the average.
  • Service Endpoint – performance metrics focused on a particular service or set of services independent of business transactions.
  • Health Rule – defines a condition or set of conditions in terms of metrics. The condition compares the performance metrics that AppDynamics collects with some static or dynamic threshold that you define. If performance exceeds the threshold, a health rule violation event is triggered. There are two types of thresholds: Warning and Critical.
  • Diagnostic Session – the goal is to collect extra Transaction Snapshots for one or more Business Transactions for a period of time.
  • Events – emitted when the application state changes. There are eight types of events:
    • Health rules violation
    • Too many slow transactions
    • Too many errors
    • Code problems
    • Application changes
    • JVM and CLR (.NET) Crashes
    • AppDynamics Config Warnings
    • Discovery (new application, tier or node discovered)
  • Errors – AppDynamics treats the following events as errors:
    • unhandled exceptions
    • HTTP error codes from 400 to 505 (the error codes to catch are configurable)
    • Error or Fatal logging events (Log4j or java.util.logging)
  • Information Points – collect metrics outside the context of Business Transactions and across several Business Transactions. For me they look similar to the Service Endpoints.
  • Data Collectors – collect extra information at the Business Transaction level, like application code arguments, return values and variables, and display the information in the Call Drill Down panels. There are two types of Data Collectors: method invocation data collectors and HTTP data collectors.