Keynotes
Towards a Global Brain
Tim O’Reilly (O’Reilly Media)
Tim O'Reilly is the founder and CEO of O'Reilly Media Inc., thought by many to be the best computer book publisher in the world. O'Reilly Media also hosts conferences on technology topics, including the O'Reilly Open Source Convention, the Web 2.0 Summit, Strata: The Business of Data, and many others. O'Reilly's Make: magazine and Maker Faire have been compared to the West Coast Computer Faire, which launched the personal computer revolution. Tim's blog, the O'Reilly Radar, "watches the alpha geeks" to determine emerging technology trends and serves as a platform for advocacy on issues of importance to the technical community. Tim is also a partner at O'Reilly AlphaTech Ventures, O'Reilly's early-stage venture firm, and is on the board of Safari Books Online.
Abstract. At the same time as we are seeing breakthrough after breakthrough in artificial intelligence, we're also seeing the fulfillment of the vision of Vannevar Bush, J.C.R. Licklider, and Doug Engelbart that computers could augment human information retrieval and problem solving. AI turned out not to be a matter of developing better algorithms, but of having enough data. The key applications of the web combine machine learning algorithms with techniques for harnessing the collective intelligence of users as captured in massive, interlinked cloud databases. Bit by bit, this is leading us towards a new kind of global brain, in which we have met the AI, and it is us. We and our devices are its senses; our databases are its memory, its habits, and even its dreams. This global brain is still a child, but as its parents, we have a responsibility to think about how best to raise it. What should we be teaching our future augmented selves? How can we make the emerging global consciousness not only more resilient, but more moral?
Is it still "Big Data" if it fits in my pocket?
David Campbell (Microsoft)
David Campbell is a Microsoft Technical Fellow working in Microsoft Corp.’s Server and Tools Business. Campbell joined Microsoft in 1994 from Digital Equipment Corp. as Microsoft began its push to become a credible enterprise software vendor. His early work at Microsoft included creating an OLE DB interface over the existing SQL Server storage engine, which helped to bootstrap SQL Server’s present-generation query processor. He also worked closely with Mohsen Agsen, another Microsoft Technical Fellow, and the Microsoft Transaction Server team to add distributed transaction support to SQL Server 6.5. Microsoft made a bold move to re-architect SQL Server for the SQL Server 7.0 release. As a key technical member of the storage engine team, Campbell implemented the SQL Server lock manager and other critical concurrency control mechanisms. He also implemented row-level locking in SQL Server 7.0, one of the hallmark features of the release.
Through the SQL Server 2000 and SQL Server 2005 releases, Campbell served in a variety of roles, including product-level architect and general manager of product development. After the SQL Server 2005 release, he led a small team in redesigning the SQL Server product development methodology. The new process, first used to produce SQL Server 2008, resulted in the highest initial quality levels of any SQL Server release to date. As of August 2010, Campbell is serving as general manager of Microsoft’s Data and Modeling Group, which oversees Microsoft’s data modeling and data access strategies.
Campbell holds a number of patents in the data management, schema, and software quality realms. He is also a frequent speaker at industry and research conferences on a variety of data management and software development topics. His current product development interests include cloud-scale computing, realizing value from ambient data, and multidimensional, context-rich computing experiences.
Abstract. "Big Data" is a hot topic but, in many ways, we are still trying to define what the phrase "Big Data" means. For many, there are more questions than answers at this point. Is it about size alone? complexity? variability? data shape? price/performance? new workloads? new types of users? Are existing data models, data management systems, data languages, and BI/ETL tools relevant in this space? Is MapReduce really a "major step backwards"? I have spent time over the last several years trying to answer many of these questions to my own satisfaction. As part of the journey I have witnessed a number of natural patterns that emerge in big data processing. In this talk I will present a catalog of these patterns and illustrate them across a scale spectrum from megabytes to 100s of petabytes. Finally, I will offer some thoughts around a systems and research agenda for this new world.