
Books by Tolga Topal

  • by Tolga Topal
    570,95 kr.

    Bachelor's Thesis from the year 2013 in the subject Computer Science - Software, grade: 70, course: Operating Systems, language: French, abstract: This final-year thesis introduces system calls in the GNU/Linux environment. The objective is to make the workings of these calls more transparent to the interested reader. The subject is treated both theoretically and practically. The theoretical part identifies and explains the different actors that come into play during a system call, starting from the computer's architecture and working up to the operating system; more precisely, it starts at the hardware level and rises to the software layer, addressing at each step the elements relevant to a system call. The practical part first presents the tools that can be used to analyse system calls, and then analyses two applications. The study of these applications also covers how to analyse programs in general, notably through the use of a debugger. Finally, a few system calls are implemented in assembly language, which makes it possible to observe the link between the hardware and software layers via the use of CPU registers. The work is framed as an introduction because the subject is sufficiently complex that any claim to cover the entire domain would be illusory.
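The abstract above describes implementing system calls directly and observing how CPU registers carry their arguments. As a minimal sketch of the same idea from user space (an illustration, not taken from the book, and assuming Linux on x86-64, where `write` is syscall number 1), the raw syscall interface can be reached from Python through libc's `syscall(2)` wrapper:

```python
import ctypes

# Sketch (not from the book): invoking the Linux `write` system call directly
# through libc's syscall(2) wrapper, bypassing Python's usual I/O layer.
# Assumes Linux on x86-64, where the syscall number for write(2) is 1.
libc = ctypes.CDLL(None, use_errno=True)

SYS_write = 1                       # x86-64 syscall number for write(2)
msg = b"hello from a raw syscall\n"

# syscall(SYS_write, fd, buf, count) -> number of bytes written, or -1 on error
written = libc.syscall(SYS_write, 1, msg, len(msg))
```

On x86-64 each argument lands in a register prescribed by the syscall ABI (rdi, rsi, rdx), which is precisely the hardware/software boundary the thesis examines.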

  • by Tolga Topal
    389,95 kr.

    Technical Report from the year 2015 in the subject Computer Science - Software, grade: 12, course: Operating Systems, language: English, abstract: This report focuses on the dependency resolution mechanism between modules of the Linux kernel. The central question is how to express this dependency relation in propositional logic, based on different Linux kernel modules, and it is around this topic that the development unfolds. To establish that development, an analysis of the kbuild system is performed; the goal is to identify which elements take part in the dependency tracking mechanism and how. Kbuild is a framework providing the tools to build the kernel. It comprises two main components: the kconfig files and the makefiles. These are the elements responsible for handling dependencies. Logic is used to express a proof, i.e., the correctness of a line of reasoning; to do so, a proof assistant, viz. Coq, is used. A decision procedure is a mechanism that resolves a problem by answering yes or no.
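The encoding the report describes, module dependencies as propositional formulas, can be sketched in a few lines (the module names and the brute-force check below are hypothetical illustrations, not the report's actual Coq development). A Kconfig-style `depends on` clause for module m with dependencies d1..dn becomes the implication m -> (d1 and ... and dn); a configuration is valid iff every implication holds:

```python
# Hypothetical sketch (module names invented, not from the report): a
# Kconfig-style "depends on" relation encoded as propositional implications.
deps = {
    "USB_STORAGE": ["USB", "SCSI"],  # USB_STORAGE depends on USB and SCSI
    "USB": [],
    "SCSI": [],
}

def valid(config):
    """Decision procedure: answers yes/no for a given truth assignment.

    For each module m, the implication m -> (all dependencies of m) must hold;
    an implication with a false antecedent (m disabled) is trivially true.
    """
    return all(
        not config.get(m, False)                    # m disabled: holds vacuously
        or all(config.get(d, False) for d in reqs)  # m enabled: deps must hold
        for m, reqs in deps.items()
    )

ok = valid({"USB_STORAGE": True, "USB": True, "SCSI": True})  # all deps enabled
bad = valid({"USB_STORAGE": True, "SCSI": True})              # USB left disabled
```

This is the yes/no shape of the decision procedure the abstract mentions; the report's contribution is proving such reasoning correct in Coq rather than merely evaluating it.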

  • by Tolga Topal
    175,95 kr.

    Master's Thesis from the year 2022 in the subject Computer Science - Artificial Intelligence, grade: 7.50, Universidad de Alcalá, course: Artificial Intelligence and Deep Learning, language: English, abstract: Vision Transformers (ViT) are neural model architectures that compete with and exceed classical convolutional neural networks (CNNs) in computer vision tasks. ViT's versatility and performance are best understood through a backward analysis. In this study, we aim to identify, analyse and extract the key elements of ViT by backtracking to the origin of Transformer neural architectures (TNA). We highlight the benefits and constraints of the Transformer architecture, as well as the foundational role of the self- and multi-head attention mechanisms, and come to understand why self-attention might be all we need. Our interest in the TNA has led us to consider self-attention as a computational primitive: this generic computational framework provides flexibility in the tasks a Transformer can perform. With a good grasp of Transformers, we then analyse their vision-applied counterpart, namely ViT, which is roughly a transposition of the initial Transformer architecture to an image-recognition and -processing context. When it comes to computer vision, convolutional neural networks are considered the go-to paradigm. Because of their proclivity for vision, we naturally seek to understand how ViT compares to CNNs; it turns out that their inner workings are rather different. CNNs are built with a strong inductive bias, an engineering feature that equips them to perform well in vision tasks. ViTs have less inductive bias and must learn the equivalent (e.g., convolutional filters) by ingesting enough data, which makes Transformer-based architectures rather data-hungry, but also more adaptable. Finally, we describe potential enhancements of the Transformer, with a focus on possible architectural extensions, and discuss some exciting learning approaches in machine learning. Our final analysis leads us to ponder the flexibility of Transformer-based neural architectures; we argue that this flexibility might possibly be linked to their Turing-completeness.
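The self-attention mechanism the thesis treats as a computational primitive can be sketched in a few lines (a toy illustration with made-up dimensions, not the thesis's code): scaled dot-product attention computes softmax(QK^T / sqrt(d)) V over learned projections of the input tokens:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention: softmax(Q K^T / sqrt(d)) V."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv         # project tokens to queries/keys/values
    scores = Q @ K.T / np.sqrt(Q.shape[-1])  # pairwise token affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)       # row-wise softmax: attention weights
    return w @ V                             # each output is a mixture of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                  # 4 tokens, embedding dim 8 (made up)
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)          # one token's worth of output per input
```

Note there is no convolution and no built-in notion of locality here: every token can attend to every other, which is the weak-inductive-bias, data-hungry flexibility the abstract contrasts with CNNs.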
