Teaching Materials for Port Operations and Facilities Courses
Abstract
This study aims to develop teaching materials for the Port Operations and Facilities course suited to Diploma IV (D-IV) Sea Transportation cadets at the Surabaya Shipping Polytechnic. Development followed the structured ADDIE model. The resulting module is intended to help cadets understand the course material. After the module was designed, the researchers gathered feedback from subject-matter experts and media experts; the module was rated highly valid and required no further revision. Recommendation: the module's effectiveness should be assessed, and further improvements made, in future work. Development limitations: the study relied primarily on feedback from cadets and was constrained by time and resources.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.