We run a series of student-led (unsupervised) study groups, called SCI (Seminars/Studies for CAMEL members' Interest). Currently, we focus on three research and study topics: i) computer architecture, ii) operating systems, and iii) field-programmable gate arrays (FPGAs).

While sciARCH and sciOS cover top-notch research topics published in tier-1 conferences in computer science and engineering, sciFPGA deals with fundamental FPGA-related topics and provides practical training by implementing OpenNVM FPGA-based controllers. If you are not familiar with CS/CE-style publications and research activities, please refer to the "conference list" published by KIISE or the following "for new students" section. To join a seminar or study group, please contact


Computer architecture and system researchers are traditionally accustomed to a conference-centric publication system. The acceptance rate of top-tier conferences in these fields is around 10%~20%, decided in a National Science Foundation (NSF) panel style by a program committee of roughly 20 members. Submitted papers are usually 10~12 pages long with detailed results, and they go through about five double-blind reviews and a rebuttal process with top experts on the topic (not all conferences offer a rebuttal, but most top conferences do).

Every year we target four top-tier architecture conferences (ISCA, MICRO, ASPLOS, and HPCA) and systems conferences (SIGMETRICS, PACT, SC, and USENIX). Note that these conferences deal with issues and research related to microarchitecture, large-scale computing systems, programming languages, and operating systems rather than circuit design or electron-device issues. If you work in any field of computer science but are unsure which conferences are top-tier in your research area, please check the following first-tier and second-tier conference lists, which are published by KIISE.

The second-tier list contains 215 conferences considered comparable to the top 50% of SCI journals. From these, the list selects 64 conferences that can be regarded as the top venue for each computer science research area. The first-tier conference list published by KIISE covers:

  1. Algorithm/Theory
  2. Artificial Intelligence / Machine Learning
  3. Computer Vision & Pattern Recognition
  4. Natural Language Processing
  5. Computer Architecture
  6. Operating Systems / Real-Time Systems
  7. Computer Graphics & Human-Computer Interaction
  8. Computer Network
  9. Distributed and Parallel Computing
  10. Database
  11. Data Mining / Information Retrieval
  12. Programming Language / Compiler
  13. Computer Security / Information Privacy
  14. Software Engineering

For each computer science field, this first-tier conference list includes 3~6 venues, and we mainly aim to submit our work to the venues listed under i) Computer Architecture, ii) Operating Systems / Real-Time Systems, and iii) Distributed and Parallel Computing.

While it would be beneficial to understand the entire computing stack from bottom to top, particularly if your background is closer to a lower-level research topic such as analog/digital circuits, VLSI, or surface sensing technologies, we strongly encourage you to become familiar with architectural approaches and system solutions before landing in the computer architecture and systems research area. All the conferences we target have proceedings, and many of the associated papers are available online.

Back to Table of Contents


Please note that there are many research resources and tools you need to be familiar with, in addition to the following items. These items are not related to research itself, but they are the minimum requirements to keep body and soul together in our research fields.


We typically use many different types of simulation frameworks for computer architecture and systems research. While the appropriate simulation methodology may vary with your research topic, we recommend becoming familiar with diverse simulation methodologies. The most popular simulators we use are as follows:

All these simulators are open source, and all the framework code is available for free download. Managing and learning these simulation tools requires solid programming knowledge as well as a strong background in computer architecture and systems. We often modify these simulators to examine a conceptual idea or a new approach. We also integrate these frameworks with one another to simulate a larger computing system and observe more details (e.g., power, energy, and data movement between heterogeneous devices).


In addition to simulation-based studies, we often analyze and characterize diverse real products and memory devices (e.g., SSDs, GPUs, etc.). This fundamentally requires strong programming skills and deep knowledge of the underlying devices themselves. For example, to evaluate those devices properly, you might need to develop a microbenchmark or characterization tool that exercises them with various workloads and access patterns. On the other hand, since handling these devices also involves managing device drivers to some extent, it is necessary to have knowledge of OS architecture and kernel implementation, such as process creation, context switching, memory allocation, synchronization mechanisms, interprocess communication, I/O buffering, and file systems; this eye-catching kernel map can be helpful for understanding kernel driver issues and checking the corresponding kernel source code. We use both the Windows Driver Model (WDM) and the Linux kernel driver model.
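As a rough illustration of what such a characterization tool looks like, the sketch below times 4 KiB reads issued sequentially versus randomly against an ordinary file. It is a minimal assumption-laden toy, not one of our actual tools: real device characterization would use direct I/O (bypassing the page cache), raw block devices, and careful buffer alignment, none of which is shown here. It uses the POSIX-only `os.pread`, so it will not run on Windows.

```python
import os
import random
import tempfile
import time

BLOCK = 4096  # 4 KiB request size, a common unit for device studies

def make_test_file(size_mib=4):
    """Create a temporary file filled with random data; return its path."""
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as f:
        f.write(os.urandom(size_mib * 1024 * 1024))
    return path

def read_pattern(path, offsets):
    """Issue one BLOCK-sized read per offset; return elapsed seconds."""
    fd = os.open(path, os.O_RDONLY)  # real tools would add O_DIRECT here
    start = time.perf_counter()
    for off in offsets:
        os.pread(fd, BLOCK, off)    # positioned read, POSIX only
    elapsed = time.perf_counter() - start
    os.close(fd)
    return elapsed

def characterize(path):
    """Compare a sequential and a random access pattern on the same file."""
    n = os.path.getsize(path) // BLOCK
    seq = [i * BLOCK for i in range(n)]
    rnd = seq[:]                     # identical offsets, shuffled order
    random.shuffle(rnd)
    return {"sequential_s": read_pattern(path, seq),
            "random_s": read_pattern(path, rnd)}

path = make_test_file()
results = characterize(path)
os.unlink(path)
print(results)
```

Because the file is small and freshly written, both patterns are likely served from the page cache here; the point is only the structure (generate offsets for each pattern, replay them, measure), which stays the same when you swap in a raw device and direct I/O.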


You might also want to be aware of diverse benchmark/evaluation tools (e.g., SPEC, Intel Iometer, Unix disk I/O tools, and other parallel I/O tools), runtime libraries (e.g., Boost, MapReduce, MPI, GPU-CUDA), and version control systems (e.g., Git and SVN). All these tools are often used for simulation and empirical evaluation studies. In addition, it would be good if you can freely use scripting languages like Python. Usually, both simulation and empirical evaluation generate a tremendous amount of data, so manual analysis of such data is prone to human error and takes quality time away from your research. Scripting tools help you accelerate data analysis by automatically parsing the structure of raw data and collecting the results. Lastly, Linux performance and profiling tools (please refer to Brendan Gregg's resources) can reduce the effort of developing your own performance measurement and evaluation tools.
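The kind of parsing script mentioned above can be sketched as follows. The log format here is invented for illustration (it does not come from any particular simulator): the script pulls per-core samples out of raw text with a regular expression and reduces them to a mean IPC per core, which is the typical shape of a post-processing script for simulation output.

```python
import re
from statistics import mean

# Hypothetical raw simulator output; the line format is an assumption.
raw_log = """\
[core0] cycles=120034 insts=95520 ipc=0.796
[core1] cycles=118900 insts=101203 ipc=0.851
[core0] cycles=121500 insts=97410 ipc=0.802
[core1] cycles=119340 insts=99870 ipc=0.837
"""

# One pattern describes the whole line; named groups keep the code readable.
LINE = re.compile(r"\[(?P<core>core\d+)\] cycles=(?P<cycles>\d+) "
                  r"insts=(?P<insts>\d+) ipc=(?P<ipc>[\d.]+)")

def parse(log):
    """Group the IPC samples of each core found in the raw text."""
    samples = {}
    for m in LINE.finditer(log):
        samples.setdefault(m["core"], []).append(float(m["ipc"]))
    return samples

def summarize(samples):
    """Reduce each core's samples to a mean IPC, rounded for reporting."""
    return {core: round(mean(vals), 3) for core, vals in samples.items()}

summary = summarize(parse(raw_log))
print(summary)  # e.g., {'core0': 0.799, 'core1': 0.844}
```

In practice the same two-stage structure (parse into per-key lists, then aggregate) scales to gigabytes of logs by streaming the file line by line instead of holding it in a string.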

Back to Table of Contents



