Third Transparency Report (Q1 2017)

What has been done since the last report?

  • We have released a new AI which significantly reduces the delay between a spammer's actions (such as match-votes) and the blocking of that spammer.
  • We have started work on a new spam-detection engine which we call “entity reputation”. It is expected to go live in late Q2/2017. This engine allows us to taint the various characteristics of different spam campaigns, which makes it easier to detect similar campaigns in the future.
  • We rolled out SMS verification at full scale.

How does a LOVOO user become a spammer?

It is important to note beforehand that by the term spam we mean fake profiles as well as spam and scam, even though the individual groups display different behavioural patterns. There are different ways in which a LOVOO user can be identified as a spammer:

  • For a start, users have the option of reporting other users. If a user has been reported several times, an automatic test is performed and the user is blocked, i.e. marked as a spammer (a minimal sketch of this flow follows the list). He or she can then no longer use the app and is no longer visible to other users.
  • A second possibility is blocking by our anti-spam system. It automatically identifies when a user is a spammer and blocks them immediately.
  • Moreover, our support team does its best to respond to reports as quickly as possible. This way we ensure that spammers are blocked as early as possible, before the critical number of reports is reached.
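
To make the first path more concrete, the following is a minimal sketch of such a report-driven blocking flow in Python. The threshold, the automatic test and all names are simplified assumptions for illustration and do not reflect our actual rules.

    REPORT_THRESHOLD = 5  # assumed critical number of reports

    class User:
        def __init__(self, user_id):
            self.user_id = user_id
            self.report_count = 0
            self.blocked = False  # blocked users are locked out and hidden from others

    def automatic_test_confirms_spam(user):
        # Placeholder for the automatic test run once the threshold is reached;
        # a real check would inspect the user's behaviour (vote rate, messages, ...).
        return True

    def handle_report(user):
        user.report_count += 1
        if user.report_count >= REPORT_THRESHOLD and automatic_test_confirms_spam(user):
            user.blocked = True  # marked as a spammer

    reported = User("example")
    for _ in range(REPORT_THRESHOLD):
        handle_report(reported)
    print(reported.blocked)  # True once the critical number of reports is reached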

In our previous report, 0.2% of our users were spammers. We were able to sustain this low value throughout the first quarter. The following graph shows the ratio of spammers to users active on a daily basis (DAU – daily active users) for the period from January 2017 to March 2017:

[Figure: Spammers per daily active user (DAU), 1 January to 31 March 2017]

The green area shows all genuine users, and the red area shows those who have been identified as spammers on the respective days.

On closer inspection of February 2017, increased spammer activity can be identified from the second week onwards. While most spam campaigns rely on creating match-votes to interact with real users, this was a concentrated spam attack that focused on creating profile visits instead. However, our AI adapted quickly to this new behaviour and began blocking those profiles automatically.

[Figure: Spammers per DAU by month, 1 January to 31 March 2017]

Since users are interested in how many spammers interact with them, we have taken another look at the relationship between match-votes initiated by spammers and all daily votes. Even though spammers account for just 0.2% of the users in the app, they can trigger a disproportionately high number of actions that negatively affect our users.

[Figure: Match-votes per DAU, 1 January to 31 March 2017]

The graph makes it clear that spammers are not always active in the same way. On average, spammers were responsible for 3.45% of all daily match-votes in Q1/2017, an improvement of about 0.42 percentage points compared to the previous quarter. Of course, our goal is 0%! The impact of spammers on daily votes is therefore greater than their share of daily active users. This result is not surprising: spammers and those creating fake profiles have the same intention, namely to lure users to another platform, and reaching users requires generating a large number of votes. Scammers, on the other hand, want to blackmail users and get them to spend money somewhere else (for instance, through fee-based phone numbers).
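
As a small illustration of how this share is calculated, the sketch below divides the match-votes created by spammers by all match-votes for each day and averages the result. The per-day counts are invented; only the quarterly average of 3.45% (versus roughly 3.87% in the previous quarter) comes from the report.

    # Spammer share of daily match-votes: votes by spammers divided by all votes.
    # The per-day counts below are invented for illustration.
    daily_votes = [
        # (match-votes by spammers, all match-votes) for one day
        (3_200, 95_000),
        (2_900, 101_000),
        (4_100, 98_500),
        # ... one entry per day of the quarter
    ]

    shares = [spam / total for spam, total in daily_votes]
    average_share = sum(shares) / len(shares)
    print(f"average spammer vote share: {average_share:.2%}")  # reported Q1/2017 value: 3.45%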

Users have encountered more spammers this quarter

Spammers can be reported by users or identified by the anti-spam system. Our goal is to automatically block as many spammers as possible before the critical number of reports is reached or a member of the support team has to intervene manually. In the first quarter of 2017 we succeeded in automatically blocking 84.97% of all spammers with the anti-spam system, which means we did not match last quarter's figure (~90%). Why? This is a direct consequence of the February spam attack, which concentrated on creating profile visits instead of match-votes. While our AI was still learning about and adapting to this new behaviour, our users reported those profiles, which were then blocked. Now that our AI has learned this behaviour, we detect such profiles automatically.

[Figure: Spammers blocked by user reports vs. the anti-spam system, 1 January to 31 March 2017]

How is a report handled?

A question that often comes up is: why is a user not blocked immediately after the first report? The answer is simple: users report other users for a variety of reasons, and not all of them justify blocking the reported user. We receive a huge number of reports each day: there were 1,317,714 reports relating to 990,074 different users in the first quarter of 2017 alone. However, 79.66% of those reported users were not spammers.
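
For a sense of scale, the quoted figures work out roughly as follows (here we read the 79.66% as a share of the reported users rather than of the individual reports):

    # Quick arithmetic on the Q1/2017 report figures quoted above.
    reports = 1_317_714          # reports received in Q1/2017
    reported_users = 990_074     # distinct users those reports referred to
    not_spammer_share = 0.7966   # share of reported users who were not spammers

    print(f"reports per reported user: {reports / reported_users:.2f}")                      # ~1.33
    print(f"genuine users among those reported: {reported_users * not_spammer_share:,.0f}")  # ~788,693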

The anti-spam system is getting quicker

We have to be very careful when we automatically block a user. On the one hand we want to catch as many spammers as possible; on the other hand we must not make rash judgements. Otherwise we run the risk of blocking genuine users who have only briefly exhibited spammer-like behaviour, for example when Match is played very quickly. The anti-spam system therefore waits until a user has acted in a negative manner several times. In the previous quarter, an average of 2.1 hours elapsed between the first action, such as a vote in Match, and the actual blocking by the anti-spam system. Thanks to our consistent efforts, this time is now just 1.1 hours: a full hour faster than before. This means that fewer users encounter spammers, since we intercept them earlier.
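
A minimal sketch of how such a time-to-block figure can be measured, assuming we keep the timestamp of a spammer's first negative action and the timestamp of the automatic block; the example timestamps are invented:

    from datetime import datetime

    # (first negative action, e.g. a vote in Match; moment of the automatic block)
    blocked_spammers = [
        (datetime(2017, 3, 1, 10, 0), datetime(2017, 3, 1, 11, 5)),
        (datetime(2017, 3, 2, 18, 30), datetime(2017, 3, 2, 19, 40)),
    ]

    hours = [(blocked - first).total_seconds() / 3600 for first, blocked in blocked_spammers]
    print(f"average time to block: {sum(hours) / len(hours):.1f} h")  # reported Q1/2017 value: 1.1 h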

We want to continue to demonstrate that spammers are not all the same. There are three different types of spammers:

  • Some give out Likes very slowly but over a long period of time.
  • Others give out Likes in small waves, for instance only 100 a day but within just a few minutes.
  • And others give out a high number of Likes continuously.

A continuously high quantity of Likes is identified very quickly and the relevant user is blocked within a few seconds. For the first two types it can sometimes take several days, because these users are active for only a very short period each day. The anti-spam system has to observe negative behaviour several times in these users before it can justifiably block them.
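
The sketch below illustrates why these patterns are caught at different speeds; the thresholds and category names are assumptions for illustration and not the rules our anti-spam system actually applies.

    def classify_like_pattern(likes_per_day, active_minutes_per_day):
        # Rough, illustrative mapping of a daily Like pattern to the three types above.
        if likes_per_day > 1_000 and active_minutes_per_day > 600:
            return "continuously high volume"  # enough evidence within seconds
        if active_minutes_per_day < 10:
            return "short daily waves"         # needs several days of observation
        return "slow but persistent"           # likewise needs repeated observation

    print(classify_like_pattern(5_000, 900))  # continuously high volume
    print(classify_like_pattern(100, 5))      # short daily waves
    print(classify_like_pattern(40, 300))     # slow but persistent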

Are spammers more likely to be male or female?

Spammers on LOVOO predominantly used female profiles in the first quarter of 2017, namely in 76.82% of cases. Compared to the last quarter, this number has risen by 7.27 percentage points. The average age of spammers in the last quarter was 29, for both men and women.

Stay tuned for our next Transparency Report Q2 2017!