- Perspectives on data science for software engineering. 3-6.
- Software analytics and its application in practice. Dongmei Zhang, Tao Xie. 7-11.
- Seven principles of inductive software engineering. Tim Menzies. 13-17.
- The need for data analysis patterns (in software engineering). Barbara Russo. 19-23.
- From software data to software theory. Jim Whitehead. 25-28.
- Why theory matters. Dag I. K. Sjøberg, Gunnar R. Bergersen, Tore Dybå. 29-33.
- Mining apps for anomalies. Andreas Zeller. 37-42.
- Embrace dynamic artifacts. Venkatesh Prasad Ranganath. 43-46.
- Mobile app store analytics. Meiyappan Nagappan, Emad Shihab. 47-49.
- The naturalness of software. Earl T. Barr, Premkumar T. Devanbu. 51-55.
- Advances in release readiness. Pete Rotella. 57-62.
- How to tame your online services. Qingwei Lin, Jian-Guang Lou, Hongyu Zhang, Dongmei Zhang. 63-65.
- Measuring individual productivity. Thomas Fritz. 67-71.
- Stack traces reveal attack surfaces. Christopher Theisen, Laurie Williams. 73-76.
- Visual analytics for software engineering data. Zhitao Hou, Hongyu Zhang, Haidong Zhang, Dongmei Zhang. 77-80.
- Gameplay data plays nicer when divided into cohorts. Jeff Huang. 81-84.
- A success story in applying data science in practice. Ayse Bener, Burak Turhan, Ayse Tosun, Bora Caglayan, Ekrem Kocaguneli. 85-90.
- There's never enough time to do all the testing you want. Kim Herzig. 91-95.
- The perils of energy mining. Abram Hindle. 97-102.
- Identifying fault-prone files in large industrial software systems. Elaine J. Weyuker, Thomas J. Ostrand. 103-106.
- A tailored suit. Olga Baysal. 107-110.
- What counts is decisions, not numbers - Toward an analytics design sheet. Günther Ruhe, Maleknaz Nayebi. 111-114.
- A large ecosystem study to understand the effect of programming languages on code quality. Baishakhi Ray, Daryl Posnett. 115-118.
- Code reviews are not for finding defects - Even established tools need occasional evaluation. Jacek Czerwonka. 119-122.
- Interviews. Christian Bird. 125-131.
- Look for state transitions in temporal data. Reid Holmes. 133-135.
- Card-sorting. Thomas Zimmermann. 137-141.
- Tools! Tools! We need tools! Diomidis Spinellis. 143-148.
- Evidence-based software engineering. Tore Dybå, Gunnar R. Bergersen, Dag I. K. Sjøberg. 149-153.
- Which machine learning method do you need? Leandro L. Minku. 155-159.
- Structure your unstructured data first! Alberto Bacchelli. 161-168.
- Parse that data! Practical tips for preparing your raw data for analysis. Philip J. Guo. 169-173.
- Natural language processing is no free lunch. Stefan Wagner. 175-179.
- Aggregating empirical evidence for more trustworthy decisions. David Budgen. 181-186.
- If it is software engineering, it is (probably) a Bayesian factor. Ayse Bener, Ayse Tosun. 187-191.
- Becoming Goldilocks. Fayola Peters. 193-197.
- The wisdom of the crowds in predictive modeling for software engineering. Leandro L. Minku. 199-204.
- Combining quantitative and qualitative methods (when mining software data). Massimiliano Di Penta. 205-211.
- A process for surviving survey design and sailing through survey deployment. Titus Barik, Emerson R. Murphy-Hill. 213-219.
- Log it all? Gail C. Murphy. 223-225.
- Why provenance matters. Michael W. Godfrey. 227-231.
- Open from the beginning. Georgios Gousios. 233-237.
- Reducing time to insight. Trevor Carnahan. 239-243.
- Five steps for success. Miryung Kim. 245-248.
- How the release process impacts your software analytics. Bram Adams. 249-253.
- Security cannot be measured. Andrew Meneely. 255-259.
- Gotchas from mining bug reports. Sascha Just, Kim Herzig. 261-265.
- Make visualization part of your analysis process. Stephan Diehl. 267-269.
- Don't forget the developers! (and be careful with your assumptions). Alessandro Orso. 271-275.
- Limitations and context of research. Brendan Murphy. 277-281.
- Actionable metrics are better metrics. Andrew Meneely. 283-287.
- Replicated results are more trustworthy. Martin Shepperd. 289-293.
- Diversity in software engineering research. Harold Valdivia Garcia, Meiyappan Nagappan. 295-298.
- Once is not enough. Natalia Juristo. 299-302.
- Mere numbers aren't enough. Per Runeson. 303-307.
- Don't embarrass yourself. Christian Bird. 309-315.
- Operational data are missing, incorrect, and decontextualized. Audris Mockus. 317-322.
- Data science revolution in process improvement and assessment? Markku Oivo. 323-325.
- Correlation is not causation (or, when not to scream "Eureka!"). Tim Menzies. 327-330.
- Software analytics for small software companies. Romain Robbes. 331-335.
- Software analytics under the lamp post (or what Star Trek teaches us about the importance of asking the right questions). Nenad Medvidovic, Alessandro Orso. 337-340.
- What can go wrong in software engineering experiments? Sira Vegas, Natalia Juristo. 341-345.
- One size does not fit all. Thomas Zimmermann. 347-348.
- While models are good, simple explanations are better. Venkatesh Prasad Ranganath. 349-352.
- The white-shirt effect. Lutz Prechelt. 353-357.
- Simpler questions can lead to better insights. Burak Turhan, Kari Kuutti. 359-363.
- Continuously experiment to assess values early on. Jürgen Münch. 365-368.
- Lies, damned lies, and analytics. Margaret-Anne D. Storey. 369-374.
- The world is your test suite. Andrew J. Ko. 375-378.