ebababi
Mutters about programming et cetera

My 2¢ to Those Interacting with FANN

Summertime at last! As always, I needed an interesting topic to explore as a side-project (in parallel to my efforts to rest), and I thought of neural networks. After all, AI is catchy these days 😜 I tried to keep the side-project scope small: just take some social network shares, encode a bunch of their attributes in a neural network and, after training, try to predict the social impact of those shares. So cool, right?

After some quick research into libraries implementing artificial neural networks (ANN), I discovered the very mature Fast Artificial Neural Network (FANN) library and its Ruby bindings, the RubyFann gem. Hastily, I grabbed some social network postings and trained a network:

# Load training postings and run NN training.
require 'ruby-fann'

postings = Posting.where(posted_at: 1.year.ago..1.month.ago)

# Build the network first: 5 inputs and 2 outputs match the encoded
# attributes and performance metrics used below; the single hidden layer
# of 8 neurons is just a starting assumption worth tuning.
fann = RubyFann::Standard.new(num_inputs: 5, hidden_neurons: [8], num_outputs: 2)

puts "Starting NN training with #{postings.count} postings..."
lap = Benchmark.ms do
  train = RubyFann::TrainData.new(
    inputs: postings.map { |p| convert_posting_to_input(p) },
    desired_outputs: postings.map { |p| convert_posting_to_output(p) }
  )
  # Train for 10,000 (max) epochs, reporting every 1,000 epochs, aiming
  # for a 0.01 desired MSE (mean square error).
  fann.train_on_data(train, 10_000, 1_000, 0.01)
end
puts 'Finished NN training in ' \
     "#{ActiveSupport::NumberHelper.number_to_rounded(lap)}ms."

fann.save(fann_file.to_s) and puts 'Stored NN in cache.'

Nothing technically fancy here. All the hard work of setting up and training a neural network was done by FANN! Now we can focus on what really matters: deciding which attributes of the social postings will be used as inputs (or outputs) and encoding them.

Variable encoding

By attribute encoding I mean that we have to convert the attribute values into a numeric form usable by the neural network. That might seem easy for quantitative properties, like “number of characters in the posting.” But how would you encode a categorical variable like “social posting platform,” taking values such as Facebook, Twitter, LinkedIn, etc.? In such cases, you have to resist the impulse to assign a number to each discrete value (i.e. to treat the property as ordinal), and instead use 1-of-C coding1 [or 1-of-(C-1) in some cases]. This means that a separate binary variable is used for each discrete value (category) the property can take. Therefore, I used a separate attribute for each social network, setting 1 for the posting’s network and 0 for all the others:

# Returns an array of integers to be fed as inputs to the NN.
def convert_posting_to_input(posting)
  [
    posting.text.size, # Text size
    posting.text =~ %r{https?://} ? 1 : 0, # Text contains links
    # TODO: Encode more attributes of social postings.
  ].concat(
    %w[facebook twitter linkedin].map do |platform|
      platform == posting.platform ? 1 : 0 # Platform
    end
  )
end
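To see the encoding in action, here is a minimal, self-contained sketch — the method is repeated from above, and a hypothetical Struct stands in for the ActiveRecord Posting model:

```ruby
# Hypothetical stand-in for the Posting model, for illustration only.
MockPosting = Struct.new(:text, :platform)

def convert_posting_to_input(posting)
  [
    posting.text.size,                     # Text size
    posting.text =~ %r{https?://} ? 1 : 0, # Text contains links
  ].concat(
    %w[facebook twitter linkedin].map do |platform|
      platform == posting.platform ? 1 : 0 # Platform (1-of-C coding)
    end
  )
end

convert_posting_to_input(MockPosting.new('Check https://example.com', 'twitter'))
# => [25, 1, 0, 1, 0]
```

Note how the last three elements form the 1-of-C part: exactly one of them is 1, marking the posting’s platform.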

Accuracy measurement

Next, I thought I should measure the forecasting capabilities of the neural network: I selected some postings, took the network’s forecasts and compared them to the actual values. Now, how could I quantify the accuracy of the predictions? Calculate the mean percentage error, maybe? OK, so for a single point, if the actual value is 0 and the forecast is 1, what would the error be? Infinite? 🤯

Symmetric mean absolute percentage error (SMAPE or sMAPE) to the rescue! It’s an accuracy measure based on percentage (or relative) errors. In contrast to the mean absolute percentage error, SMAPE has both a lower bound and an upper bound. However, if either the actual value or the forecast value is 0, the error jumps straight to the upper bound. That’s a limitation, but it’s still better than infinity. Using the normalized version (bounded between 0 and 1), the definition is:

SMAPE = (1/n) × Σₜ |Fₜ − Aₜ| / (|Aₜ| + |Fₜ|)

where Aₜ is the actual value and Fₜ the forecast at point t.
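As a quick sanity check of the definition, here is the measure in a few lines of plain Ruby (the sample values are hypothetical):

```ruby
# Normalized SMAPE: the mean of |F - A| / (|A| + |F|) over all points,
# treating the 0/0 case as a zero error.
def smape(forecasts, actuals)
  forecasts.zip(actuals).sum do |f, a|
    next 0.0 if f.zero? && a.zero?
    (f - a).abs / (a.abs + f.abs)
  end / forecasts.size
end

smape([10.0, 10.0], [10.0, 10.0]) # => 0.0 (perfect forecast)
smape([0.0, 10.0], [5.0, 10.0])   # => 0.5 (the zero forecast maxes out its term)
```

The second call shows the limitation mentioned above: the point with the zero forecast contributes the full upper-bound error of 1, no matter how close to zero the actual value is.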

Let’s apply this to the forecasted postings:

# Load a sample of postings and run NN forecasts.
postings = postings.sample(100)

puts "Starting NN testing with #{postings.count} postings..."
lap = Benchmark.ms do
  sum_of_pct_errors = [0.0] * 2 # One accumulator per output metric.
  size              = postings.size

  postings.each do |posting|
    # Calculate the forecasted values and fetch the actual metrics.
    forecast_outputs = fann.run convert_posting_to_input(posting).map(&:to_f)
    actual_outputs   = convert_posting_to_output(posting).map(&:to_f)

    # For each performance metric calculate the normalized symmetric
    # absolute percentage error.
    pct_errors = forecast_outputs.zip(actual_outputs).map do |forecast, actual|
      next 0.0 if forecast.zero? && actual.zero? # Avoid dividing 0 by 0.
      (forecast - actual).abs / (actual.abs + forecast.abs)
    end

    # For each performance metric summarize all errors.
    sum_of_pct_errors =
      sum_of_pct_errors.zip(pct_errors).map do |sum_of_pct_error, pct_error|
        sum_of_pct_error + pct_error
      end
  end

  # For each performance metric divide by the number of data points to get the
  # symmetric mean absolute percentage error (SMAPE).
  mean_of_pct_errors =
    sum_of_pct_errors.map { |sum_of_pct_error| sum_of_pct_error / size }

  # To estimate an overall accuracy, take the (currently unweighted) mean of
  # the per-metric mean error measurements and subtract it from 1.
  accuracy = 1.0 - (mean_of_pct_errors.sum / mean_of_pct_errors.size)

  puts 'Accuracy: ' \
       "#{ActiveSupport::NumberHelper.number_to_percentage(accuracy * 100)}"
  puts 'Symmetric Mean Absolute Percentage Errors'
  puts %w[Clicks Engagements].join("\t")
  puts mean_of_pct_errors.map { |mean_of_pct_error| \
    ActiveSupport::NumberHelper.number_to_percentage(mean_of_pct_error * 100) \
  }.join("\t")
end
puts 'Finished NN testing in ' \
     "#{ActiveSupport::NumberHelper.number_to_rounded(lap)}ms."

As a future improvement, provided that the data is strictly positive, a better measure of relative accuracy can be obtained based on the log of the accuracy ratio. This measure is supposed to be easier to analyse statistically, and it has valuable symmetry and unbiasedness properties2.
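A minimal sketch of that measure in plain Ruby — assuming strictly positive forecasts and actuals, with hypothetical sample values:

```ruby
# Mean log accuracy ratio: the mean of ln(F / A). It is symmetric in the
# sense that forecasting double the actual and forecasting half the actual
# give errors of equal magnitude and opposite sign.
def mean_log_accuracy_ratio(forecasts, actuals)
  forecasts.zip(actuals).sum { |f, a| Math.log(f / a) } / forecasts.size
end

mean_log_accuracy_ratio([20.0], [10.0]) # =>  0.693... (ln 2, over-forecast)
mean_log_accuracy_ratio([5.0], [10.0])  # => -0.693... (ln ½, under-forecast)
```

Unlike SMAPE, this cannot handle zeros at all (ln 0 is undefined), which is why it only applies when the data is strictly positive.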

Outro

My (F)ANN tips, based on what I learned from this exercise, are:

  1. Use 1-of-C coding to encode categorical variables.
  2. Use SMAPE for measuring accuracy when data points include zeros.

Looking forward to your stories and suggestions 🙂

  1. Sarle, Warren S. “How Should Categories Be Encoded?” Neural Network FAQ, part 2 of 7: Learning, 11 Oct. 2002, ftp://ftp.sas.com/pub/neural/FAQ2.html#A_cat

  2. Tofallis, Chris. “A Better Measure of Relative Prediction Accuracy for Model Selection and Model Estimation.” Journal of the Operational Research Society, vol. 66, no. 8, 2015, pp. 1352–1362, DOI: 10.1057/jors.2014.103