Say hello to tweeted Social Intelligence

Monday, 12 May 2014 - 8:00am IST | Place: Mumbai | Agency: DNA

Social intelligence is understood to be the capacity to negotiate complex social relationships and environments with efficiency. A group of Indian engineers has succeeded in converting the scattered content of a social network into usable intelligence. At Frrole, Abhishek Vaid and the rest of the team are interested in going beyond traditional analytics, moving towards understanding the "content" of tweets to provide valuable insight. Their company raised $245k in a round led by angel investors, among them Rajan Anandan, the managing director of Google India. In an exclusive interview, Abhishek shares his business idea.

What makes your approach different?
Our emphasis on going beyond mere summary statistics and keyword counts makes our approach different from other social analytics companies. In a loose comparison, our approach is more similar to the Google Knowledge Graph than, say, Google Analytics, but for Social Data.

Where do you see your technology being applied?
There are many verticals that can benefit from Social Intelligence; media, retail and entertainment are a few examples. In the past, we have also had discussions with people from specific verticals such as healthcare and education. Once you are able to scale technology to detect and annotate tweets (or other social data points) with enough semantic information, you can use it in novel ways to address a multitude of real-world use cases across many verticals.

So what is it you are really selling?
We see ourselves as 'Data Providers' as opposed to 'Custom Application Providers'. This means our constant effort is to build a product that serves as a platform, ready to provide intelligence according to a user's information needs. We do not see ourselves providing custom visual or dashboard-based solutions to our clients. The hope is to build very flexible, parameterized APIs that can power and address the data requirements of most applications relying on Social Data integration.

How do I use it?
The current process is to generate an API key through our developer website and start consuming one or more of our APIs through standard interfaces. This lets anyone use our APIs as standard web services, which currently support both JSON and XML formats for data exchange. The usage pattern of each API and its supported parameters is well documented on our website. In cases where clients are not sure how to use our APIs directly, they contact us and we help them with the right API calls for their information needs.
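The workflow described above, key plus parameterized web-service call, can be sketched in a few lines of Python. The host, path, and parameter names below are hypothetical placeholders; the real endpoints and supported parameters are documented on Frrole's developer website.

```python
from urllib.parse import urlencode, urlunparse

# Hypothetical values -- the actual endpoint and key format come from
# the developer website after generating an API key.
API_KEY = "your-api-key"
BASE_HOST = "api.frrole.example"

def build_request_url(resource, params, fmt="json"):
    """Compose a standard web-service call for one of the APIs.

    `resource` and the parameter names are illustrative only; `fmt`
    reflects the two supported exchange formats, "json" and "xml".
    """
    query = dict(params, api_key=API_KEY, format=fmt)
    return urlunparse(("https", BASE_HOST, f"/v1/{resource}", "", urlencode(query), ""))

url = build_request_url("tweets/annotated", {"topic": "retail"})
print(url)
```

The resulting URL can then be fetched with any HTTP client, with the response parsed as JSON or XML depending on the `format` parameter chosen.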

Why an API instead of a 'shiny interface'?
The answer is enhanced scalability, sustainability and scope of the product. As mentioned previously, there is a vast scope of applications in the 'Social Data' space, with many possible use cases. A robust, intelligent and flexible data source will go a long way towards addressing most of these use cases, whereas a fixed 'shiny interface' might only address a few. From a business point of view, we would rather leave decisions about the visual experience to our clients: they belong to different verticals and understand best how to visualize their 'social integrations', rather than us defining it for them.

What are some of the challenges you are facing?
Scalability and product innovation are our main challenges. On scalability, we currently consume only a fraction of the total Twitter data stream generated every day. We would like to scale our infrastructure and algorithms to handle a sizeable portion of this data. This means designing and implementing a very fast and reliable 'Text Processing Pipeline'. It also raises database-level challenges in providing a fast retrieval mechanism for clients using our APIs. As for product innovation, the constant effort is to make the system more "intelligent" and "accurate". Moving beyond the English language is also a challenge in this regard. Finally, keeping our algorithms updated with state-of-the-art language-processing tools is a constant challenge as well.
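A text-processing pipeline of the kind mentioned above can be pictured as a chain of annotation stages, each enriching a tweet record before passing it on. The toy sketch below uses deliberately trivial stage logic and an invented annotation schema; Frrole's actual pipeline and annotations are not public.

```python
def detect_language(tweet):
    # Placeholder: real systems use a trained language-identification
    # model, not an ASCII check.
    tweet["lang"] = "en" if all(ord(c) < 128 for c in tweet["text"]) else "other"
    return tweet

def extract_keywords(tweet):
    # Placeholder keyword stage: keep lowercased words longer than
    # four characters, with trailing punctuation stripped.
    tweet["keywords"] = [w.strip(".,!?").lower()
                         for w in tweet["text"].split() if len(w) > 4]
    return tweet

def run_pipeline(tweet, stages):
    # Each stage annotates the record in place and hands it on.
    for stage in stages:
        tweet = stage(tweet)
    return tweet

annotated = run_pipeline({"text": "Retail brands tracking customer sentiment"},
                         [detect_language, extract_keywords])
print(annotated)
```

Structuring the work as independent stages is what makes such a pipeline scalable: each stage can be parallelized or swapped for a better model without touching the others.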
 



