What is an event-driven architecture?
An event-driven architecture consists of event creators and event consumers. The creator, which is the source of the event, only knows that the event has occurred. Consumers are entities that need to know the event has occurred; they may be involved in processing the event or they may simply be affected by the event.
Event consumers typically subscribe to some type of middleware event manager. When the manager receives notification of an event from a creator, it forwards that event to all registered consumers. The benefit of an event-driven architecture is that it enables large numbers of creators and consumers to exchange status and response information in near real-time.
Many programs spend most of their time waiting for something to happen. This is especially true for computers that work directly with humans, but it’s also common in areas like networks. Sometimes there’s data that needs processing, and other times there isn’t.
The event-driven architecture helps manage this by building a central unit that accepts all data and then delegates it to the separate modules that handle the particular type. This handoff is said to generate an “event,” and it is delegated to the code assigned to that type.
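The central unit described above can be sketched as a minimal in-process event bus, assuming nothing beyond the Python standard library; the `EventBus` name and the `"order_placed"` event type are illustrative, not taken from any particular framework:

```python
from collections import defaultdict

class EventBus:
    """Central unit: accepts events and delegates them by type."""
    def __init__(self):
        self._consumers = defaultdict(list)

    def subscribe(self, event_type, consumer):
        self._consumers[event_type].append(consumer)

    def publish(self, event_type, payload):
        # The creator only knows the event occurred; the bus forwards
        # it to every consumer registered for that type.
        for consumer in self._consumers[event_type]:
            consumer(payload)

received = []
bus = EventBus()
bus.subscribe("order_placed", lambda p: received.append(("billing", p)))
bus.subscribe("order_placed", lambda p: received.append(("shipping", p)))
bus.publish("order_placed", {"order_id": 42})
```

Note that the creator never names its consumers: adding a third subscriber requires no change to the publishing code, which is what makes the pattern easy to extend when new event types appear.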
Overall, event-driven architectures:
- Are easily adaptable to complex, often chaotic environments
- Scale easily
- Are easily extendable when new event types appear
However, the pattern has caveats:
- Testing can be complex if the modules can affect each other. While individual modules can be tested independently, the interactions between them can only be tested in a fully functioning system.
- Error handling can be difficult to structure, especially when several modules must handle the same events.
- When modules fail, the central unit must have a backup plan.
- Messaging overhead can slow down processing speed, especially when the central unit must buffer messages that arrive in bursts.
- Developing a systemwide data structure for events can be complex when the events have very different needs.
- Maintaining a transaction-based mechanism for consistency is difficult because the modules are so decoupled and independent.
The event-driven architecture is best for:
- Asynchronous systems with asynchronous data flow
- Applications where the individual data blocks interact with only a few of the many modules
- User interfaces
Many applications have a core set of operations that are used again and again in different patterns that depend upon the data and the task at hand. This is the territory of the microkernel (or plug-in) architecture: a minimal core handles the common operations, and everything else is installed as a plug-in. The popular development tool Eclipse, for instance, will open files, annotate them, edit them, and start up background processors. The tool is famous for doing all of these jobs with Java code and then, when a button is pushed, compiling the code and running it.
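A rough sketch of the core-plus-plug-ins idea, in Python rather than Java for brevity; the `Microkernel` class and the `"annotate"` plug-in are hypothetical names, not Eclipse's actual API:

```python
class Microkernel:
    """Core system: holds only the frequently used operations
    plus a registry of installed plug-ins."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        # Handshake: the plug-in announces itself so the kernel
        # knows it is installed and ready to work.
        self._plugins[name] = plugin

    def run(self, name, data):
        if name not in self._plugins:
            raise KeyError(f"no plug-in registered for {name!r}")
        return self._plugins[name](data)

kernel = Microkernel()
kernel.register("annotate", lambda text: "# TODO\n" + text)
result = kernel.run("annotate", "print('hi')")
```

Everything task-specific lives behind `register`, so the kernel itself stays small; the price, as the caveats below note, is that every plug-in must carry this handshaking code.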
The microkernel approach has caveats of its own:
- Deciding what belongs in the microkernel is often an art. It ought to hold the code that’s used frequently.
- The plug-ins must include a fair amount of handshaking code so the microkernel is aware that the plug-in is installed and ready to work.
- Modifying the microkernel can be very difficult or even impossible once a number of plug-ins depend upon it. The only solution is to modify the plug-ins too.
- Choosing the right granularity for the kernel functions is difficult to do in advance but almost impossible to change later in the game.
The microkernel architecture is best for:
- Tools used by a wide variety of people
- Applications with a clear division between basic routines and higher order rules
- Applications with a fixed set of core routines and a dynamic set of rules that must be updated frequently
The microservices architecture has caveats too:
- The services must be largely independent or else interaction can cause the cloud to become imbalanced.
- Not all applications have tasks that can be easily split into independent units.
- Performance can suffer when tasks are spread out between different microservices. The communication costs can be significant.
- Too many microservices can confuse users as parts of the web page appear much later than others.
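The communication cost mentioned above can be made concrete with a toy sketch: two "services" that, in a real deployment, would sit on separate hosts. Here the network hop is replaced by a direct call, so only the serialization overhead each hop pays is visible; the service names are invented for illustration:

```python
import json

def inventory_service(request_json):
    """Stand-in for a remote microservice: parses a JSON request,
    returns a JSON reply."""
    request = json.loads(request_json)
    return json.dumps({"sku": request["sku"], "in_stock": True})

def order_service(sku):
    # Every hop between services pays serialization plus (in a real
    # deployment) network latency; here only the JSON round-trip shows.
    reply = json.loads(inventory_service(json.dumps({"sku": sku})))
    return reply["in_stock"]

available = order_service("ABC-1")
```

A task that crosses several such boundaries repeats this encode/decode cycle at each one, which is why spreading tightly coupled work across many services can hurt performance.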
Microservices are best for:
- Websites with small components
- Corporate data centers with well-defined boundaries
- Rapidly developing new businesses and web applications
- Development teams that are spread out, often across the globe
The space-based architecture has caveats as well:
- Transactional support is more difficult with RAM databases.
- Generating enough load to test the system can be challenging, but the individual nodes can be tested independently.
- Developing the expertise to cache the data for speed without corrupting multiple copies is difficult.
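One common guard against corrupting replicated copies is optimistic versioning: each value carries a version number, and a write against a stale version is rejected rather than applied. A minimal single-node sketch, with invented names and no real replication:

```python
class CacheNode:
    """One node's in-RAM copy of shared data. Version numbers let
    stale writes be detected instead of silently corrupting a copy."""
    def __init__(self):
        self._data = {}  # key -> (version, value)

    def read(self, key):
        return self._data.get(key, (0, None))

    def write(self, key, expected_version, value):
        version, _ = self._data.get(key, (0, None))
        if version != expected_version:
            return False  # another writer got there first; caller retries
        self._data[key] = (version + 1, value)
        return True

node = CacheNode()
ok_first = node.write("user:1", 0, {"name": "Ada"})
ok_stale = node.write("user:1", 0, {"name": "Bob"})  # stale version, rejected
```

The retry burden this pushes onto callers is part of why the needed caching expertise is hard-won, and why losing the occasional low-value write is more tolerable here than, say, in bank transactions.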
The space-based architecture is best for:
- High-volume data like click streams and user logs
- Low-value data that can be lost occasionally without big consequences—in other words, not bank transactions
- Social networks