KAIST College of Business Seminar & Forum

Academic Seminar: Focused Concept Miner (FCM): an Interpretable Deep Learning for Text Exploration

  • Date
  • 2018-12-21 ~ 2018-12-21
  • Time
  • 10:00 ~ 11:30
  • Place
  • SUPEX Building, 5th Floor, Chey A Hall
  • Department
  • School of Management Engineering
  • Major
  • IT Management
We would like to invite you to participate in the Management Engineering (ME) Seminar.

1. When: December 21st (Friday), 10:00~11:20
2. Where: Chey A Hall
3. Speaker: Prof. Dokyun Lee (Carnegie Mellon University)
4. Topic: Focused Concept Miner (FCM): an Interpretable Deep Learning for Text Exploration
5. Research field: IT Management
* The lecture will be delivered in English.

We introduce the Focused Concept Miner (FCM), an interpretable deep learning text mining algorithm designed to (1) automatically extract interpretable "concepts" from text data, (2) "focus" the mined concepts to explain any existing user-specified business outcome, such as purchase conversion (linked to reviews read) or crowdfunding success (linked to project descriptions), and (3) quantify the correlational relative importance of each mined concept for the business outcome, as well as its importance relative to other user-specified explanatory variables. Compared to existing methods that partially achieve FCM's goals, FCM achieves higher interpretability and predictive performance. The relative importance of discovered concepts provides managers with an easy way to gauge potential impact and to inform hypothesis development. We present FCM as a complementary technique for exploring and understanding unstructured textual data before applying standard causal inference techniques. Applications can be found in any setting with text and structured data tied to a business outcome. We evaluate FCM's performance on a comprehensive dataset that tracks individual-level review reading, searching, and conversion, as well as on a separate crowdfunding dataset. Furthermore, we run a series of experiments to investigate the accuracy-interpretability trade-off, providing empirical observations for the interpretable machine learning literature. The paper concludes with ideas for future development, potential application scenarios, and managerial implications.
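To make the abstract's three goals concrete, here is a minimal illustrative sketch, NOT the authors' FCM algorithm: it approximates concept mining with NMF topics and the outcome-linked importance ranking with a logistic regression over topic loadings plus one structured covariate. All data, names, and the NMF/logistic-regression choices are hypothetical stand-ins for illustration only.

```python
# Illustrative analog of FCM's three goals (NOT the FCM algorithm itself):
# (1) mine "concepts" from text, (2) link them to a business outcome,
# (3) rank their correlational relative importance.
import numpy as np
from sklearn.decomposition import NMF
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical review texts with purchase-conversion labels
docs = [
    "great battery life and fast shipping",
    "battery died quickly, poor quality",
    "fast shipping, well packaged, great value",
    "poor quality control, item arrived broken",
]
converted = np.array([1, 0, 1, 0])        # hypothetical outcome
price = np.array([9.9, 19.9, 9.9, 24.9])  # hypothetical structured covariate

# (1) mine "concepts" -- here crudely approximated by NMF topics
tfidf = TfidfVectorizer()
X_text = tfidf.fit_transform(docs)
nmf = NMF(n_components=2, random_state=0)
loadings = nmf.fit_transform(X_text)      # document-by-concept matrix

# (2) link concepts plus the structured covariate to the outcome
features = np.hstack([loadings, price.reshape(-1, 1)])
clf = LogisticRegression().fit(features, converted)

# (3) rank relative (correlational) importance via coefficient magnitudes
importance = np.abs(clf.coef_[0])
ranking = np.argsort(importance)[::-1]    # most important feature first
```

Unlike this two-stage analog, FCM mines and focuses concepts jointly against the outcome, which is what the abstract credits for its higher interpretability and predictive performance.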
Contact: Lee, Jisun (jisunlee@kaist.ac.kr)