MICO Testing: One, Two, Three…

We’ve finally reached an important milestone in our validation work in the MICO project…we can begin testing and integrating our toolset with the first release of the platform to evaluate the initial set of media extractors. 

This blog post is more or less a diary of our first attempts at using MICO in conjunction with our toolset, which includes:

  • HelixWare – the Video Hosting Platform (our online video platform that allows publishers and content providers to ingest, encode and distribute videos across multiple screens)
  • WordLift – the Semantic Editor for WordPress (assisting the editors writing a blog post and organising the website’s contents using semantic fingerprints)
  • Shoof – a UGC video recording application (this is an Android native app providing instant video-recording for people living in Cairo)

The workflow we're planning to implement aims to improve the content creation, content management and content delivery phases.

Combined Deliverable 7.2.1 & 8.2.1 Use Cases – First Prototype

The diagram describes the various steps involved in the implementation of the scenarios we will use to run the tests. At this stage the main goal is to:

  • a) ingest videos in HelixWare,
  • b) process these videos with MICO and
  • c) add relevant metadata that will be further used by the client applications WordLift and Shoof.  

While we're working to see MICO in action in real-world environments, the tests we've designed aim to provide valuable feedback to the developers of each specific module in the platform.

These low-level components (called Technology Enablers, or simply TEs) include the extractors that analyse and annotate media files, as well as modules for data querying and content recommendation. We're planning to evaluate the TEs that are significant for our user stories, and we have designed the tests around three core objectives:

  1. Output accuracy – how accurate, detailed and meaningful each single response is when compared to other available tools;
  2. Technical performance – how much time each task requires and how well the solution scales as the volume of content being analysed increases;
  3. Usability – evaluated in terms of integration, modularity and usefulness.

As of today, with everything still extremely experimental, we're using a dedicated MICO platform running in a protected, centralised cloud environment. This machine has been installed directly by the project's technology partners: this makes it easier for us to test and simpler for them to keep developing, hot-fixing and stabilising the platform.

Let’s start

By accessing the MICO Admin UI (available at the `/mico-configuration` path), we've been able to select the analysis pipeline. MICO orchestrates different extractors and combines them into pipelines. At this stage the developer must choose one pipeline at a time.


Upon startup we can see the status of the platform by reading the command output window; while not standardised, this already provides an overview of the startup of each media extractor in the pipeline.


For installing and configuring the MICO platform you can read the end-user documentation; at this stage, though, I would recommend waiting until everything becomes more stable (here is a link to the MICO end-user documentation)!

After starting up the system, we've been able to use the platform's REST APIs to successfully send the first video file and request its processing. This is done in three main steps:

1. Create a Content Item
curl -X POST "http://<mico_platform>/broker/inject/create"

2. Create a Content Part
curl -X POST "http://<mico_platform>/broker/inject/add?ci=http%3A%2F%2Fdemo2.mico-project.eu%3A8080%2Fmarmotta%2F322e04a3-33e9-4e80-8780-254ddc542661&type=video%2Fmp4&name=horses.mp4" --data-binary @Bates_2045015110_512kb.mp4



3. Submit for processing
curl -v -X POST "http://<mico_platform>/…"

HTTP/1.1 200 OK
Server: Apache-Coyote/1.1
Content-Length: 0
Date: Wed, 08 Jul 2015 08:08:11 GMT
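
To make these steps easier to repeat while testing, the three calls can be chained in a small shell script. The sketch below is only illustrative: the host, the video file and, in particular, the submit endpoint path are placeholders and assumptions on our side (the exact submit URL is omitted above), so adapt them to your own installation and to the response format of your platform version.

#!/bin/sh
# Minimal sketch of the injection sequence described above (requires curl and jq).
# Host, file name and the submit path are placeholders/assumptions.

MICO_HOST="<mico_platform>"   # host and port of your MICO platform
VIDEO="horses.mp4"            # the video file to analyse

# 1. Create a Content Item; we assume the broker answers with the new item URI
#    (adapt the parsing if your platform version returns JSON instead)
CI=$(curl -s -X POST "http://${MICO_HOST}/broker/inject/create")

# URL-encode the item URI so it can be passed as a query parameter
CI_ENC=$(printf '%s' "$CI" | jq -sRr @uri)

# 2. Attach the video as a Content Part of the item
curl -s -X POST "http://${MICO_HOST}/broker/inject/add?ci=${CI_ENC}&type=video%2Fmp4&name=${VIDEO}" \
  --data-binary @"${VIDEO}"

# 3. Submit the item for processing; '/broker/inject/submit' is only our guess
#    for the path that is elided in the post above
curl -v -X POST "http://${MICO_HOST}/broker/inject/submit?ci=${CI_ENC}"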

In the next blog posts we will see how to consume the data coming from MICO and how this data will be integrated into our application workflows.

In the meantime, if you’re interested in knowing more about MICO and how it could benefit your existing applications you can read:

Stay tuned for the next blog post!


One Week at the Monastery building WordLift.

Last week our team gathered for a focused face-to-face session in a remote part of Abruzzo, right in the centre of Italy: the Abbey of Santo Spirito d'Ocre. Like early agile teams, we've rediscovered the importance of working together in close proximity and the value of keeping a healthy balance between hard work and quality of life. We've also found ourselves in love with the product we're building and happy to share it with others.

Our business is made of small distributed teams organised as startups, each one self-sufficient and focused on a specific aspect of our technology (we help businesses and individuals manage and organise content, be it text, data, images or videos).

As peers gathered from different time zones in this unique little Cistercian monastery, we began executing our plan.

And yes, getting to the meeting with a clear agenda did help. All the issues on our list had been summarised and shared on a dedicated Trello board. These mainly covered the work we've been doing over the last few years on WordLift, our semantic editor for WordPress.

Cistercians (at least in the old days) kept manual labour as a central part of their monastic life. In our case we've managed to structure most of our days around three core activities: business planning, bug fixing and software documentation. At the base of it all we've kept the happiness of working together on something we like.

Empathic vs Lean: setting up the Vision.

Most of the work in startups is governed by lean principles; the tools and the mindset of the people have been strongly influenced by the work of Eric Ries, who first published a book to promote the lean approach and to share his recipe for continuous innovation and business success.

After over three years of work on WordLift we can say that we've worked in a completely different way. Lean demands that teams get out of the building and look for problems to be solved. In our case, while we've spent most of our time “out of the building” (and with our clients), we've started our product journey from a technology, inspired by the words of Tim Berners-Lee on building a web for open, linked data, and we've studied all possible ways to create an emotional connection with bloggers and writers.

Not just that: we have also dedicated time to analysing the world of journalism, its changes and how it will continue evolving according to journalistic celebrities like David Carr (a famous New York Times journalist who died earlier this year) and many others like him, as well as to the continuously emerging landscape of news algorithms that help people reach the content they want.

Establish the Vision of WordLift

Understanding that WordLift, and the go-to-market process that will follow, shall be empathy-driven rather than lean is one of the greatest outcomes of our “monastic” seminar in the valley of L'Aquila.

To use an empathy-driven expression: we've finally set the Vision.

Organise the Workflow: getting things done.

Like most of today's open-source software, WordLift is built primarily on GitHub.

While GitHub can be seen simply as a programming tool, it is also the largest digital space for collaborative work, and it embeds your workflow.

While working at the monastery we've been able to discuss, test and implement a Gitflow workflow.

The Gitflow Workflow is designed around a strict branching model for project releases. While somewhat complicated at the beginning, we see it as a robust framework for continuing the development of WordLift and for fixing bugs without waiting for the next release cycle.
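
As a rough illustration (branch and version names here are made up, not our actual WordLift branches), this is how a hotfix typically moves through a Gitflow-style repository without waiting for the next release:

# create the hotfix branch directly off the production branch
git checkout master
git checkout -b hotfix/3.0.1

# ...commit the fix on the hotfix branch...

# merge it back into master and tag the patched release
git checkout master
git merge --no-ff hotfix/3.0.1
git tag 3.0.1

# keep the development branch in sync, then clean up
git checkout develop
git merge --no-ff hotfix/3.0.1
git branch -d hotfix/3.0.1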

Documentation. Documentation. Documentation.

Writing (hopefully good) software documentation helps the users of any product and should provide a general understanding of its core features and functions.

The reality is that, when documenting your own software, the advantages go way beyond the original aim of helping end-users.

By updating the WordLift documentation we were able to get a clearer overview of all the actions required of an editor when composing a blog post and/or creating and publishing the knowledge graph. We have also been able to detect flaws in the code and in the user experience.

Most importantly, we've found that writing the documentation (which is also done collaboratively over GitHub) can be a great way to keep everyone in sync between the “Vision” of the product (how to connect with our users) and the existing implementation (what we are offering with this release).

Next steps

Now it's time to organise the team and start the next iterations by engaging with the first users of this new release while fine-tuning the value proposition (below, the empathic view of @mausa89 on the USP of WordLift).


As this is the very first blog post I'm writing with WordLift v3, I can tell you I'm very happy about it. If you would like to test it too, join our newsletter... we will keep you posted while we continue playing!



The closing of this blog post is @colacino_m‘s night improvisation from the monastery.


WordLift powered by MICO at the European Semantic Web Conference 2015

Yes! Time to start presenting WordLift v3 to the world and how we’re planning to help Greenpeace Italy with MICO cross-media analysis and… a lot more.



User-Generated-Content for News and Media.

User-generated content (UGC) plays an amazing role on-air and online in our everyday information diet. …


WordLift Hackathon

There is no better way for a startup like us to kickstart the new year than with a hackathon to create WordLift's product roadmap. …


An interactive visualisation of events in SalzburgerLand

A Leaflet-based geographical visualisation pulling event data from data.salzbugerland.com – Proudly powered by Redlink and WordLift. …




What do we need cross-media analysis for?


At InSideOut10 our mission is to create engaging user experiences that help people interact with digital content…


InSideOut.Today brand reload at the Mashable Social Media Day in Egypt!

Our partner Fady Ramzy was one of the speakers at the Mashable #SMDay2014 event in Cairo. …


Surreal Gallery

It’s time to change skin here in Cairo. More news coming. … 


“The value of art is to heal. We make art because our society is evil and we need to heal it. Can we really change the world? We can’t.” – Alejandro Jodorowsky


This year at SXSW Matteoc had a remarkable experience and witnessed a conversation with Alejandro Jodorowsky. … 


A private snapshot of #SXSWInteractive 2014

A private and quite narrow collection of stories we’ve been collecting in Austin this year for #SXSWinteractive 2014. …