My analysis focused on our second largest discipline, Computer Science. The top graph summarizes the overall results of the analysis. This graph shows the Top 10 papers among those who have listed computer science as their discipline and chosen a subdiscipline.
The bars are colored according to subdiscipline and the number of readers is shown on the x-axis. The bar graphs for each paper show the distribution of readership levels among subdisciplines. Click on any graph to explore it in more detail or to grab the raw data.
A minority of Computer Scientists have listed a subdiscipline. I would encourage everyone to do so.
Latent Dirichlet Allocation (available full-text): LDA is a means of classifying objects, such as documents, based on their underlying topics. It turns out that interest in this paper is very strong among those who list artificial intelligence as their subdiscipline.
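To give a flavor of what "classifying documents by their underlying topics" means in practice, here is a minimal sketch of a collapsed Gibbs sampler for LDA. The paper itself uses variational inference, and everything below (function name, hyperparameters, toy corpus) is illustrative, not taken from the paper:

```python
import random
from collections import defaultdict

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Tiny collapsed Gibbs sampler for LDA (illustrative, not optimized)."""
    rng = random.Random(seed)
    V = len({w for doc in docs for w in doc})          # vocabulary size
    # z[d][i] is the topic currently assigned to word i of document d
    z = [[rng.randrange(n_topics) for _ in doc] for doc in docs]
    ndk = [[0] * n_topics for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(n_topics)]  # topic-word counts
    nk = [0] * n_topics                                # words per topic
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]  # remove this word's current assignment...
                ndk[d][k] -= 1; nkw[k][w] -= 1; nk[k] -= 1
                # ...and resample its topic in proportion to p(topic | rest)
                weights = [(ndk[d][t] + alpha) * (nkw[t][w] + beta) / (nk[t] + V * beta)
                           for t in range(n_topics)]
                k = rng.choices(range(n_topics), weights)[0]
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][w] += 1; nk[k] += 1
    return ndk, nkw

# Toy corpus: two documents about sport, two about baking.
docs = [["ball", "goal", "match", "ball"],
        ["goal", "match", "referee"],
        ["oven", "bake", "flour", "oven"],
        ["bake", "flour", "dough"]]
doc_topics, topic_words = lda_gibbs(docs, n_topics=2)
```

After enough iterations the sport words and the baking words tend to concentrate in different topics, which is the "classifying by underlying topics" behavior the paper formalizes.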
In fact, AI researchers contributed the majority of readership to 6 out of the top 10 papers. Presumably, those interested in popular topics such as machine learning list themselves under AI, which explains the strength of this subdiscipline, whereas papers like the MapReduce one or the Google paper appeal to a broad range of subdisciplines, giving those papers a smaller number of readers spread across more subdisciplines.
The interesting thing about this paper is that it had some of the lowest within-subdiscipline readership scores of the top papers, yet folks from across the entire spectrum of computer science are reading it.
The Anatomy of a Large-Scale Hypertextual Web Search Engine (available full-text): In this paper, Google founders Sergey Brin and Larry Page discuss how Google was created and how it initially worked. I would expect that the largest share of readers have it in their library mostly out of curiosity rather than direct relevance to their research.
Reinforcement Learning: An Introduction (available full-text): This is another machine learning paper, and its presence in the top 10 is primarily due to AI, with a small contribution from folks listing neural networks as their discipline, most likely due to the paper being published in IEEE Transactions on Neural Networks.
Reinforcement learning is essentially a technique borrowed from biology: the behavior of an intelligent agent is controlled by the amount of positive stimuli, or reinforcement, it receives in an environment containing many different interacting positive and negative stimuli.
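The book covers far more machinery than this (value functions, temporal-difference methods, and so on), but the basic idea of behavior shaped by positive and negative reinforcement can be sketched with an epsilon-greedy agent on a multi-armed bandit. Every name and parameter here is illustrative, not from the paper:

```python
import random

def run_bandit(true_means, steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy agent: it estimates each action's average reward
    ("reinforcement") from experience and increasingly prefers the action
    whose estimate is highest, while occasionally exploring."""
    rng = random.Random(seed)
    n = len(true_means)
    estimates = [0.0] * n   # learned value of each action
    counts = [0] * n        # how often each action was taken
    for _ in range(steps):
        if rng.random() < epsilon:
            a = rng.randrange(n)                            # explore
        else:
            a = max(range(n), key=lambda i: estimates[i])   # exploit
        # Noisy stimulus: may be positive or negative reinforcement.
        reward = rng.gauss(true_means[a], 1.0)
        counts[a] += 1
        estimates[a] += (reward - estimates[a]) / counts[a]  # running mean
    return estimates, counts

# Three actions whose true average rewards the agent does not know.
estimates, counts = run_bandit([-1.0, 0.5, 2.0])
```

Over time the agent's choices concentrate on the action that yields the most positive reinforcement, which is the behavior the paragraph above describes.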
Toward the Next Generation of Recommender Systems: I would really have expected this to be at least number 3 or 4, but the strong showing by the AI discipline for the machine learning papers in spots 1, 4, and 5 pushed it down.
This paper discusses the theory of sending communications down a noisy channel and introduces a few key engineering parameters, such as entropy, which measures the average information content, or uncertainty, of a communication.
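As a quick worked example of that parameter: the entropy of a source with symbol probabilities p_i is H = -Σ p_i log2(p_i) bits. A minimal sketch (function name is mine, the formula is Shannon's):

```python
from math import log2

def entropy(probs):
    """Shannon entropy in bits: average information per symbol."""
    return -sum(p * log2(p) for p in probs if p > 0)

h_fair = entropy([0.5, 0.5])    # a fair coin carries 1.0 bit per toss
h_biased = entropy([0.9, 0.1])  # a biased coin carries about 0.47 bits
```

The more predictable the source, the lower its entropy, and Shannon showed this quantity bounds how much a message can be compressed.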
Convex Optimization (available full-text): This is a very popular book on a widely used optimization technique in signal processing. Convex optimization tries to find the provably optimal solution to an optimization problem, as opposed to settling for a nearby local maximum or minimum.
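To illustrate the "provably optimal" point: for a convex function, even plain gradient descent cannot get stuck in a spurious local minimum, because the only local minimum is the global one. This is just a sketch of that property, not the interior-point methods the book actually develops, and the names and step size are made up:

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Follow the negative gradient; on a convex objective this
    converges to the global minimum, not merely a nearby one."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# f(x) = (x - 3)^2 + 1 is convex; its unique global minimum is at x = 3.
# Gradient of f is 2 * (x - 3).
xmin = gradient_descent(lambda x: 2 * (x - 3), x0=-10.0)
```

Starting far away at x = -10, the iterate still lands on the global minimizer, which is exactly the guarantee non-convex problems lack.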
Professor Boyd has a very popular set of video classes at Stanford on the subject, which probably gave this a little boost, as well.
Videos of techniques at SciVee or JoVE, or recorded lectures, can really help spread awareness of your research. Adding the readers of this book to those of the #4 paper would be enough to put it in the #2 spot, just below the LDA paper.
Well, there are a few things to note. First of all, it shows that Mendeley readership data is good enough to reveal both papers of long-standing importance and interesting up-and-coming trends. Fun stuff can be done with this!