May 2022, 2(2): 157-172. doi: 10.3934/steme.2022011

Personalized exercise recommendation method based on causal deep learning: Experiments and implications

1. Department of Computer Science, Changchun Humanities and Sciences College, Changchun 130117, China; wangsuhua@ccrw.edu.cn; mazq@nenu.edu.cn; zhaodawei@ccrw.edu.cn

2. School of Information Science and Technology, Northeast Normal University, Changchun 130117, China; jihj328@nenu.edu.cn; liut790@nenu.edu.cn; chenaq669@nenu.edu.cn

* Correspondence: Email: zhaodawei@ccrw.edu.cn; Tel: +86-431-84536338

Academic Editor: Jun Shen

Received: December 2021. Accepted: May 2022. Published: June 2022.

The COVID-19 pandemic has accelerated innovations for supporting learning and teaching online. However, online learning also reduces opportunities for direct communication between teachers and students. Given the inevitable diversity in learning progress and achievement among individual online learners, it is difficult for teachers to give personalized guidance to a large number of students. Such guidance may cover many aspects, including recommending tailored exercises to a specific student according to that student's knowledge gaps in a subject. In this paper, we propose a personalized exercise recommendation method named causal deep learning (CDL), which combines causal inference and deep learning. Deep learning is first used to train and generate initial feature representations for students and exercises, and intervention algorithms based on causal inference are then applied to further tune these representations. Deep learning is then used again to predict each student's score ratings on exercises, from which the Top-N ranked exercises are recommended to similar students who likely need to strengthen their skills and understanding in the subject areas indicated by the chosen exercises. Experiments with CDL and four baseline methods on two real-world datasets demonstrate that CDL is superior to the existing methods in capturing students' knowledge gaps and in recommending appropriate exercises to individual students to help bridge those gaps.

Citation: Suhua Wang, Zhiqiang Ma, Hongjie Ji, Tong Liu, Anqi Chen, Dawei Zhao. Personalized exercise recommendation method based on causal deep learning: Experiments and implications. STEM Education, 2022, 2 (2) : 157-172. doi: 10.3934/steme.2022011
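
To make the pipeline described in the abstract concrete, below is a minimal, hypothetical sketch (assuming PyTorch) of how a CDL-style workflow could be wired together: learn student and exercise embeddings, adjust them with an intervention step, predict score ratings, and rank the Top-N exercises. The class, method, and parameter names (ScorePredictor, intervene, the embedding size, the fixed shift) are illustrative assumptions, not the authors' implementation; in CDL the adjustment is driven by causal inference rather than a fixed offset.

```python
# Hypothetical sketch of a CDL-style recommendation pipeline (not the authors' code).
import torch
import torch.nn as nn

class ScorePredictor(nn.Module):
    """Predict a student's score rating on an exercise from learned embeddings."""
    def __init__(self, n_students, n_exercises, dim=32):
        super().__init__()
        self.student_emb = nn.Embedding(n_students, dim)    # student feature representation
        self.exercise_emb = nn.Embedding(n_exercises, dim)  # exercise feature representation
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def intervene(self, emb, shift):
        """Placeholder for the intervention step; CDL derives this from causal inference."""
        return emb + shift

    def forward(self, s_idx, e_idx, s_shift=0.0, e_shift=0.0):
        s = self.intervene(self.student_emb(s_idx), s_shift)
        e = self.intervene(self.exercise_emb(e_idx), e_shift)
        return self.mlp(torch.cat([s, e], dim=-1)).squeeze(-1)

# Recommend Top-N exercises for one student by ranking predicted ratings.
model = ScorePredictor(n_students=450, n_exercises=2218)
student = torch.tensor([7])
all_exercises = torch.arange(2218)
with torch.no_grad():
    ratings = model(student.repeat(2218), all_exercises)
top_n = torch.topk(ratings, k=10).indices  # indices of the 10 recommended exercises
```

In the method itself, the ratings predicted for one student are also used to recommend the same Top-N exercises to similar students who likely share the same knowledge gaps.
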

Figure 1.  Framework for causal deep learning (CDL) (Cs is the student input embedded with causal interventions; Ce is the exercise input embedded with causal interventions)
Figure 3.  Influence of the length of the knowledge path
Figure 4.  RMSE with different embedding sizes
Figure 5.  RMSE with different epochs
Figure 6.  Impact of interaction layers
Table 1.  Exercise-knowledge matrix
Exercise   k1   k2   k3   k4   k5   k6   k7   k8   k9   k10   kN
E1          1    1    0    1    0    0    0    0    0     0    0
E2          0    1    1    1    0    0    0    0    0     0    0
Ei          0    0    0    0    0    0    1    1    0     1    0
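
Table 1 is, in effect, a binary exercise-knowledge matrix (a Q-matrix): entry (i, j) is 1 if exercise i covers knowledge concept j. A minimal sketch of building such a matrix from exercise tags; the tag dictionary and concept list below are made-up placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical exercise -> knowledge-concept tags (illustrative data only).
exercise_tags = {"E1": ["k1", "k2", "k4"], "E2": ["k2", "k3", "k4"], "Ei": ["k7", "k8", "k10"]}
concepts = [f"k{i}" for i in range(1, 11)]  # k1..k10; a real dataset would list all kN concepts

# Build the binary exercise-knowledge matrix: row = exercise, column = concept.
q_matrix = np.zeros((len(exercise_tags), len(concepts)), dtype=int)
for row, (exercise, tags) in enumerate(exercise_tags.items()):
    for tag in tags:
        q_matrix[row, concepts.index(tag)] = 1

print(q_matrix)  # reproduces the 0/1 pattern shown in Table 1
```
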
Table 2.  Summary of the PAM database
Type of PAM exercise   Multiple choice   Judgement   Filling the blank   Calculation
Count                  917               326         384                 591
Table 3.  Summary of the PAM and Algebra 2005-2006 datasets for experiments
Dataset             Number of students   Number of exercises   Knowledge concepts   Records
PAM                 450                  2218                  368                  1264
Algebra 2005-2006   300                  1085                  437                  3000
Table 4.  RMSE and comparison
                      Algebra 2005-2006             PAM
Method                RMSE      CDL improvement     RMSE      CDL improvement
User-CF               0.8441    10.95%              0.8718    14.44%
KS-CF                 0.8033    6.42%               0.7989    6.63%
DKT+                  0.7892    4.75%               0.7602    1.88%
KGEB-CF               0.7768    3.23%               0.7633    2.28%
CDL                   0.7617    -                   0.7459    -
Average improvement             6.33%                         6.31%
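
RMSE in Table 4 is the standard root-mean-square error between predicted and observed score ratings (lower is better). A minimal reference implementation follows; the sample arrays are placeholders, not values from the datasets.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error between observed and predicted ratings."""
    y_true, y_pred = np.asarray(y_true, dtype=float), np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Illustrative values only; lower RMSE means more accurate rating prediction.
print(rmse([1.0, 0.0, 1.0, 1.0], [0.8, 0.3, 0.6, 0.9]))
```
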
Table 5.  Comparison of precision and recall on PAM
Method                P@5     CDL improvement   P@10    CDL improvement   R@5     CDL improvement   R@10    CDL improvement
User-CF               0.493   15.82%            0.481   11.43%            0.049   14.29%            0.079   10.13%
KS-CF                 0.514   11.09%            0.496   8.06%             0.049   14.29%            0.081   7.41%
DKT+                  0.529   7.94%             0.497   7.85%             0.053   5.67%             0.085   2.35%
KGEB-CF               0.547   4.39%             0.512   4.69%             0.054   3.70%             0.085   2.35%
CDL                   0.571   -                 0.536   -                 0.056   -                 0.087   -
Average improvement           9.81%                     8.01%                     9.49%                     5.56%
Table 6.  Comparison of precision and recall on Algebra 2005-2006
Method                P@5     CDL improvement   P@10    CDL improvement   R@5     CDL improvement   R@10    CDL improvement
User-CF               0.502   8.23%             0.496   6.65%             0.048   12.50%            0.069   14.50%
KS-CF                 0.518   5.60%             0.512   3.32%             0.048   12.50%            0.072   9.72%
DKT+                  0.532   2.82%             0.516   2.52%             0.050   8.00%             0.077   2.78%
KGEB-CF               0.538   1.67%             0.523   1.15%             0.053   2.00%             0.074   6.76%
CDL                   0.547   -                 0.529   -                 0.054   -                 0.079   -
Average improvement           4.58%                     3.41%                     8.75%                     8.44%
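
Precision@K (P@K) and Recall@K (R@K) in Tables 5 and 6 score the Top-N recommendation list in the standard way: P@K is the share of the K recommended exercises that are actually relevant to the student, and R@K is the share of all relevant exercises captured within the Top-K. A minimal sketch, with made-up example lists:

```python
def precision_recall_at_k(recommended, relevant, k):
    """Precision@K and Recall@K for one student's Top-K recommendation list."""
    top_k = recommended[:k]
    hits = len(set(top_k) & set(relevant))
    precision = hits / k
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Illustrative example: exercises 12 and 40 in the Top-5 are truly needed by the student.
print(precision_recall_at_k([12, 7, 40, 3, 99], [12, 40, 55, 61], k=5))  # (0.4, 0.5)
```
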
Table 7.  Comparison of performances of CDL with/without causal inference (↑: higher is better)
Dataset             Metric    CDL-Without-CI   CDL-CI (CDL)
PAM                 P@5 ↑     0.572            0.582
                    P@10 ↑    0.507            0.545
                    R@5 ↑     0.052            0.058
                    R@10 ↑    0.078            0.091
Algebra 2005-2006   P@5 ↑     0.569            0.578
                    P@10 ↑    0.499            0.539
                    R@5 ↑     0.051            0.055
                    R@10 ↑    0.073            0.089