<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>1 | Hongda Sun</title><link>https://sunhongda98.netlify.app/publication-type/1/</link><atom:link href="https://sunhongda98.netlify.app/publication-type/1/index.xml" rel="self" type="application/rss+xml"/><description>1</description><generator>Wowchemy (https://wowchemy.com)</generator><language>en-us</language><lastBuildDate>Wed, 01 Feb 2023 00:00:00 +0000</lastBuildDate><image><url>https://sunhongda98.netlify.app/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url><title>1</title><link>https://sunhongda98.netlify.app/publication-type/1/</link></image><item><title>ConvNTM: Conversational Neural Topic Model</title><link>https://sunhongda98.netlify.app/publication/convntm/</link><pubDate>Wed, 01 Feb 2023 00:00:00 +0000</pubDate><guid>https://sunhongda98.netlify.app/publication/convntm/</guid><description>&lt;center>&lt;b>&lt;font size=5>Abstract&lt;/font>&lt;/b>&lt;/center>
&lt;p>Topic models have been investigated thoroughly for many years owing to their great potential for analyzing and understanding texts. Recently, researchers have combined topic models with deep learning techniques, yielding Neural Topic Models (NTMs). However, existing NTMs are mainly evaluated on general document modeling without considering different textual analysis scenarios. We assume that topics should be modeled differently in different textual analysis tasks. In this paper, we propose the Conversational Neural Topic Model (ConvNTM), designed specifically for the conversational scenario. Unlike general document topic modeling, a conversation session lasts for multiple turns: each short-text utterance follows a single topic distribution, and these topic distributions are dependent across turns. Moreover, conversations involve roles, i.e., speakers and addressees, and topic distributions are partially determined by these roles. We take these factors into account to model topics in conversations via a multi-turn and multi-role formulation. We also leverage word co-occurrence relationships as a new training objective to further improve topic quality. Comprehensive experimental results on benchmark datasets demonstrate that our proposed ConvNTM achieves the best performance both in topic modeling and in typical downstream tasks in conversational research (i.e., dialogue act classification and dialogue response generation).&lt;/p></description></item><item><title>Debiased, Longitudinal and Coordinated Drug Recommendation through Multi-Visit Clinic Records</title><link>https://sunhongda98.netlify.app/publication/drugrec/</link><pubDate>Tue, 01 Nov 2022 00:00:00 +0000</pubDate><guid>https://sunhongda98.netlify.app/publication/drugrec/</guid><description>&lt;center>&lt;b>&lt;font size=5>Abstract&lt;/font>&lt;/b>&lt;/center>
&lt;p>AI-empowered drug recommendation has become an important task in healthcare research, offering an additional perspective that assists human doctors in making more accurate and more efficient drug prescriptions. Generally, drug recommendation is based on a patient&amp;rsquo;s diagnosis results in electronic health records. We assume that three key factors must be addressed in drug recommendation: 1) eliminating recommendation bias caused by limitations of observable information, 2) better utilizing historical health conditions, and 3) coordinating multiple drugs to ensure safety. To this end, we propose DrugRec, a causal-inference-based drug recommendation model. Its causal graphical model can identify and deconfound the recommendation bias with front-door adjustment. Meanwhile, we model multiple visits in the causal graph to characterize a patient&amp;rsquo;s historical health conditions. Finally, we formulate drug-drug interactions (DDIs) as a propositional satisfiability (SAT) problem, and solving this SAT problem helps better coordinate the recommendation. Comprehensive experimental results show that our proposed model achieves state-of-the-art performance on the widely used MIMIC-III and MIMIC-IV datasets, demonstrating the effectiveness and safety of our method.&lt;/p></description></item><item><title>Stylized Dialogue Generation with Multi-Pass Dual Learning</title><link>https://sunhongda98.netlify.app/publication/mpdl/</link><pubDate>Wed, 01 Dec 2021 00:00:00 +0000</pubDate><guid>https://sunhongda98.netlify.app/publication/mpdl/</guid><description>&lt;center>&lt;b>&lt;font size=5>Abstract&lt;/font>&lt;/b>&lt;/center>
&lt;p>Stylized dialogue generation, which aims to generate a response in a given style for an input context, plays a vital role in intelligent dialogue systems. Since there is no parallel data between contexts and responses of the target style S1, existing works mainly use back-translation to generate stylized synthetic data for training, drawing on data from the context, the target style S1, and an intermediate style S0. However, the interaction among these texts is not fully exploited, and the pseudo contexts are not adequately modeled. To overcome these difficulties, we propose multi-pass dual learning (MPDL), which leverages the duality among the context, the response of style S1, and the response of style S0. MPDL builds mappings among these three domains, in which the context should be reconstructed by the MPDL framework, and the reconstruction error serves as the training signal. To evaluate the quality of the synthetic data, we also introduce discriminators that effectively measure how well a pseudo sequence matches a specific domain, and the evaluation result is used as the weight for that data. Evaluation results indicate that our method obtains significant improvements over previous baselines.&lt;/p></description></item></channel></rss>