Tag: bigdata
Last week I had the opportunity to share Socialmetrix's experience installing and configuring Datastax Analytics clusters on Azure. Datastax offers a commercial solution as a single bundle, with Cassandra, Spark, and Solr integrated. The talks were given at the Argentina Big Data Meetup, hosted by Jampp, and at the Nardoz Meetup, hosted by Medallia.
Sometimes you just need data to learn how an algorithm works, to run a stress test, or simply as an excuse to spin up several machines in a cluster and watch them crunch the data. More often than not, it is incredibly hard to obtain data, and a few colleagues I have talked to had the same problem, so this post is a collection of links and references to datasets I know have been open sourced. Please contribute =)
We have been using the Spark Standalone deployment for more than a year now, but recently I tried Azure's HDInsight, which runs on Hadoop 2.6 (a YARN deployment).
After provisioning the servers, all the small tests worked fine: I was able to run spark-shell and read from and write to Blob Storage, until I tried to write to a Datastax Cassandra cluster, which constantly returned an error message: Exception in thread "main" java.io.IOException: Failed to open native connection to Cassandra at {10.0.1.4}:9042
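For reference, a write that hits this code path looks roughly like the minimal sketch below, using the DataStax spark-cassandra-connector; the keyspace, table, and column names are hypothetical, and the connection host is simply the node reported in the error:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import com.datastax.spark.connector._   // adds saveToCassandra to RDDs

// Point the connector at the Cassandra node reported in the error message.
val conf = new SparkConf()
  .setAppName("cassandra-write-test")
  .set("spark.cassandra.connection.host", "10.0.1.4")

val sc = new SparkContext(conf)

// Hypothetical keyspace/table: smx.events(id, payload)
sc.parallelize(Seq((1, "first"), (2, "second")))
  .saveToCassandra("smx", "events", SomeColumns("id", "payload"))
```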
During August, Juan Pampliega and I were invited to put together a Big Data workshop at Espacio Fundación Telefónica as a complement to the "Big Bang Data" exhibition. This post is a summary of the event and the reading references for those who did not have the chance to attend.
Recently I built an environment to help me teach Apache Spark. My initial thought was to use Docker, but I ran into some issues, especially on older machines, so to avoid further blockers I decided to build a Vagrant image instead and complement the package with Apache Zeppelin as the UI.
The Vagrant image is built on Debian Jessie, with Oracle Java, Apache Spark 1.4.1, and Zeppelin (built from the master branch).
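Once the VM is provisioned, a quick sanity check (not part of the Vagrant provisioning itself, just an assumed verification step) is to run a trivial job from a Zeppelin notebook or spark-shell to confirm the Spark context is wired up:

```scala
// Trivial sanity check: parallelize a small range and confirm the
// pre-created SparkContext (sc) in Zeppelin / spark-shell responds.
val nums = sc.parallelize(1 to 1000)
println(s"count = ${nums.count()}, sum = ${nums.sum()}")
```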
This talk describes Socialmetrix's experience after more than a year running Apache Spark in production, the reasons that led us to move from Hadoop+Hive to Spark, and the facts we took into account to support that decision.
This talk was given on April 30 at the IT Architecture Conference (ArqConf 2015). In it, I presented the desired characteristics, the challenges, and a proposed architecture for Big Data Analytics that enables real-time computation.