
Event Logging in a MERN Stack + Apollo GraphQL Web Application

We dive into the design decisions behind developing a history event logging feature

Jonathan Lin

November 25th, 2020

As part of Pulse Analytics's ongoing effort to improve data infrastructure and integrity, we recently took on the challenge of internally tracking and displaying the history of changes to our data.

As we iterate on our internal tools application, the business analysts we work with are increasingly able to manage that data through CRUD interfaces that ensure data validity and consistency across multiple business objects. We wanted to build an event logging system accompanying those interfaces so business analysts - who are accustomed to being able to see who changed what in the data and when - could vet the data under their usual quality assurance process.

We essentially wanted something similar to the "History" feature on Jira tickets:

Jira Ticket History

During the backend team's brainstorming and research phase, the topic of event sourcing came up. Martin Fowler, author of Patterns of Enterprise Application Architecture, describes event sourcing as follows:

The fundamental idea of Event Sourcing is that of ensuring every change to the state of an application is captured in an event object, and that these event objects are themselves stored in the sequence they were applied for the same lifetime as the application state itself.

Fowler distinguishes event sourcing from merely logging changes in the application state. Per his article linked above, event sourcing - when done comprehensively - allows for facilities such as complete rebuilding, temporal queries, and event replay.

Early on, we saw that Kickstarter appeared to have implemented event sourcing with full history and replayability in their project d.rip, but we ruled that approach out as not being in scope for us. For our immediate use case at Pulse Analytics, we were looking for highly granular event logging -- a watered-down event sourcing narrowly focused on a specific part of our database. The question facing us was: What would be the easiest and cleanest way to log granular changes to a business object in our application with our existing MERN stack and Apollo GraphQL setup?

As I'll explain, Fowler's guidance on event sourcing still came in handy for this use case.

While I can't show the before/after of our actual codebase, I can demo the approximate approach we've taken so far with a classic Todos example app in the MERN stack and Apollo GraphQL flavor. Here’s a glimpse of the final product before we dive into some code snippets:

Our fully functioning todos app with real-time event history

Feel free to follow along by comparing the Todos app BEFORE and AFTER the event logging implementation; instructions for running the apps are in their README files.

High-Level Pattern

After reading Fowler's example of Event Sourcing involving tracking ships, in which responsibilities are intuitively divvied up among Ship, Event, and EventProcessor objects, we decided to model our todo CUD operations in a similar fashion. On each operation, we decided we'd instantiate a Todo that a specific type of Event would take in, which an EventProcessor would then process.

We determined the implementation for updateTodo would look something like:

// instantiate a Todo object with the incoming new data; during this instantiation, have the Todo object also fetch its most recent state in the database
const todo = new Todo(newData)

// instantiate an update todo event with metadata and the todo; during this instantiation, have the event diff the prev state of the todo with the current state of the todo to record what fields and field values changed, if any
const event = new TodoUpdateEvent(
  metadata, // username, userId info from the server
  todo, // the todo instance
)

// instantiate an event processor
const eventProcessor = new EventProcessor()

// have the event processor process the event, executing both the creation of a new log entry and the update operation on the `todos` collection
eventProcessor.process(event)

Building the Todo Model

To update a single todo, we had been performing a straightforward findOneAndUpdate operation against our todos collection using the MongoDB Node.js driver, with a similar approach for creating and deleting a todo. The resolver for updateTodo looked like the following:

const resolvers = {
  // ...other code
  Mutation: {
    updateTodo: (parent, args, context, info) => {
      const { input: { _id, description, isDone } } = args
      const { db } = context

      return db.collection('todos')
        .findOneAndUpdate(
          { _id: ObjectId(_id) },
          { $set: { description, isDone } },
          { returnOriginal: false },
        )
        .then(({ value }) => value)
    },
    // ...other mutation resolvers
  }
};

To organize things in an easier-to-understand, object-oriented, and extensible manner following Fowler's example, we needed to build a Todo model.

Just as a Ship object in his example handles its own departure and arrival at a port, we designed a Todo class -- not too different from models in Ruby on Rails or mongoose -- that handled its own creation, updating, and deletion.

Here's the rough scaffold of what we came up with. Because there are async operations performed during the todo instantiation, we used the factory init pattern described here:

// symbol gating the constructor so a Todo can only be created via Todo.init
const INIT_TODO_SYMBOL = Symbol('INIT_TODO')

class Todo {
  static async init({ data, db }) {
    // initialize a new Todo
    const todo = new Todo(INIT_TODO_SYMBOL)

    // code here for loading up the todo with data and prevData (for later diffing)

    return todo
  }

  constructor(token) {
    if (token !== INIT_TODO_SYMBOL) {
      throw new Error(
        `Can't initialize Todo via constructor; use Todo.init instead`
      )
    }
  }

  getPrevData() {
    // fetch the todo's prevData
  }

  create(session, timestamp) {
    // put MongoDB insertOne op here
  }

  update(session, timestamp) {
    // put MongoDB findOneAndUpdate op here
  }

  delete(session) {
    // put MongoDB findOneAndDelete op here
  }
}
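The loading step elided in init above might be filled in roughly as follows. This is a hypothetical sketch, not our actual implementation: the in-memory fakeDb stands in for the real MongoDB driver, and prevData for a brand-new todo is assumed to be null.

```javascript
// Hypothetical sketch of the loading step elided in Todo.init above.
// fakeDb at the bottom is an in-memory stand-in for the MongoDB driver.
const INIT_TODO_SYMBOL = Symbol('INIT_TODO')

class Todo {
  static async init({ data, db }) {
    const todo = new Todo(INIT_TODO_SYMBOL)
    todo.data = data

    // load the most recent persisted state for later diffing;
    // a brand-new todo has no previous state
    todo.prevData = data._id
      ? await db.collection('todos').findOne({ _id: data._id })
      : null

    return todo
  }

  constructor(token) {
    if (token !== INIT_TODO_SYMBOL) {
      throw new Error(
        `Can't initialize Todo via constructor; use Todo.init instead`
      )
    }
  }
}

// in-memory stand-in for the real db, for demonstration only
const fakeDb = {
  collection: () => ({
    findOne: async ({ _id }) =>
      _id === 'abc'
        ? { _id: 'abc', description: 'buy milk', isDone: false }
        : null,
  }),
}
```

Calling Todo.init with new data for an existing _id yields a todo whose data holds the incoming state and whose prevData holds the stored state, ready for diffing.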

Build Event, Todo Events, and Event Processor

Next up was building the base Event object. The most complicated part of this object was implementing the diffing util that would determine which fields and field values had changed. For the Todos application data, diffing is simple, but you can imagine how diffing more complicated data can get messy. The flat npm package came in handy for that, but I won't go into detail here.
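Before looking at the Event object itself, here's a self-contained sketch of what such a diffing util could look like. It's hypothetical: our real implementation leans on the flat package, while the hand-rolled flatten below just keeps the example runnable, and leaf values (including arrays) are compared by strict equality for simplicity.

```javascript
// Hypothetical sketch of the diffing util behind Event.getDeltas.
// A hand-rolled flatten stands in for the `flat` npm package here.
function flatten(obj, prefix = '') {
  return Object.entries(obj || {}).reduce((acc, [key, value]) => {
    const path = prefix ? `${prefix}.${key}` : key
    if (
      value &&
      typeof value === 'object' &&
      !Array.isArray(value) &&
      !(value instanceof Date)
    ) {
      // nested objects become dotted paths, e.g. { a: { b: 1 } } -> 'a.b'
      Object.assign(acc, flatten(value, path))
    } else {
      // leaves (including arrays and Dates) are compared by strict
      // equality below, so arrays are compared by reference in this sketch
      acc[path] = value
    }
    return acc
  }, {})
}

function getDeltas({ prev = {}, next = {}, excludedPaths = [] }) {
  const flatPrev = flatten(prev)
  const flatNext = flatten(next)
  const paths = new Set([...Object.keys(flatPrev), ...Object.keys(flatNext)])

  const deltas = []
  for (const path of paths) {
    if (excludedPaths.includes(path)) continue
    if (flatPrev[path] !== flatNext[path]) {
      deltas.push({ field: path, before: flatPrev[path], after: flatNext[path] })
    }
  }
  return deltas
}
```

For example, diffing { description: 'buy milk' } against { description: 'buy oat milk' } yields a single delta for the description field with its before and after values.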

The Event object came out looking like this, where entityId refers to the todo instance's _id (or in the case of a newly created todo, the staged-but-not-yet-persisted _id of that todo):

class Event {
  constructor(metadata) {
    this.timestamp = new Date()
    this.userId = metadata.userId
    this.username = metadata.username
    this.action = null // options: 'updated', 'created', or 'deleted'
    this.entityId = null // child event will set
  }

  getDeltas({ prev = {}, next = {}, excludedPaths = [] }) {
    // diff prev and next but exclude the paths in excludedPaths and return deltas
  }
}

With Event built, the objects for each CUD operation -- TodoCreateEvent, TodoUpdateEvent, and TodoDeleteEvent -- could be built on top of it, each with its own process function corresponding to its respective CUD op that the EventProcessor will execute later on.

When instantiated, each child event type executes getDeltas right away with its todo instance's prev/next data, saving those deltas for the EventProcessor to persist later on in the event log.

As an example, here's the code for TodoUpdateEvent:

// the action string persisted on the event; matches the options noted on Event
const UPDATE_ACTION = 'updated'

class TodoUpdateEvent extends Event {
  constructor(metadata, todo) {
    super(metadata)
    this.action = UPDATE_ACTION
    this.entityId = todo.data._id

    this.entity = todo
    this.deltas = this.getDeltas()
  }

  process(session) {
    return this.entity.update(session, this.timestamp)
  }

  getDeltas() {
    return super.getDeltas({
      prev: this.entity.prevData,
      next: this.entity.data,
      excludedPaths: ['createdOn', 'updatedOn', '_id'],
    })
  }
}

As for the EventProcessor, its job is straightforward: execute the CUD op and log the operation along with the deltas.

In a MongoDB replica set, wrapping these dual ops in a transaction guarantees that the logging of the operation and the operation itself either both succeed or both fail, with any in-progress changes rolled back on failure. This ensures the event log and the CUD ops never fall out of sync in the case of a write failure.

However, in a standalone, locally running MongoDB instance, which is what’s being used for our sample Todos app, transactions are not supported. So for demo purposes, we can overlook that for now.
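For reference, the transactional version could be wrapped roughly like this. It's a hypothetical sketch: processInTransaction and stubClient aren't from our codebase, and the stub only mimics the driver's session lifecycle so the example runs without a replica set.

```javascript
// Hypothetical sketch of wrapping the dual write in a transaction on a
// replica set. `client` would be the MongoClient; stubClient below only
// mimics the session lifecycle so the example runs without a database.
async function processInTransaction(client, runOps) {
  const session = client.startSession()
  let result
  try {
    // the driver's withTransaction commits on success, aborts on error,
    // and retries transient transaction errors
    await session.withTransaction(async () => {
      result = await runOps(session)
    })
    return result
  } finally {
    await session.endSession()
  }
}

// stand-in for a real MongoClient, for demonstration only
const stubClient = {
  startSession: () => ({
    withTransaction: async (fn) => fn(),
    endSession: async () => {},
  }),
}
```

In the real resolver, runOps would be something like (session) => eventProcessor.process({ event, db, session }), so the log write and the CUD op share one transaction.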

Here's the scaffold for the EventProcessor:

const _ = require('lodash') // _.isEmpty treats both [] and {} as empty

class EventProcessor {
  async process({ event, db, session }) {
    // if nothing changed, return the entity's prev data without logging
    // and without performing any CRUD action
    if (_.isEmpty(event.deltas)) {
      console.log(`No-op: event wasn't processed because no deltas`)
      return event.entity.prevData
    }

    const [result] = await Promise.all([
      event.process(session),
      this.log({ event, db, session }),
    ])

    return result
  }

  async log({ event, db, session }) {
    // persist the event to the logs collection
  }
}
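To round out the scaffold, here's a hypothetical sketch of what the log method might do; the eventLogs collection name and the in-memory fakeDb are assumptions for illustration, not our actual code.

```javascript
// Hypothetical sketch of EventProcessor.log (only log is shown); the
// `eventLogs` collection name and fakeDb below are assumptions.
class EventProcessor {
  log({ event, db, session }) {
    const { timestamp, userId, username, action, entityId, deltas } = event
    // persist one log entry per processed event, inside the same
    // session/transaction as the CUD op when one is available
    return db.collection('eventLogs').insertOne(
      { timestamp, userId, username, action, entityId, deltas },
      { session },
    )
  }
}

// in-memory stand-in for the logs collection, for demonstration only
const loggedEvents = []
const fakeDb = {
  collection: () => ({
    insertOne: async (doc) => {
      loggedEvents.push(doc)
      return { insertedId: loggedEvents.length }
    },
  }),
}
```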

Closing Thoughts

While we're still in the experimentation phase with event logging and transitioning our Apollo GraphQL setup to handle it, it's been very cool to see how this pattern lets us track history in a fairly complete, organized way and surface it to our users.

One aspect we're still figuring out is whether the way we've distributed responsibilities among the different classes -- Event types, EventProcessor, and business object models -- could be improved. We're also considering experimenting with mongoose instead of rolling our own object modeling, and thinking about how that could streamline our event logging workflow.

We hope you've found this an interesting read. If you’d like to join a team that’s always striving to build out innovative ways to improve data infrastructure and integrity, check out our openings!
