Last update May 2, 2016 3:19 PM

 

drmlog  


Personal Website


Benvenuti / Welcome
     

LAVORI RECENTI / RECENT WORKS
Accademici / Academic

Using Neural Word Embeddings to Model User Behavior and Detect User Segments
Roberto Saia, Ludovico Boratto, Salvatore Carta, Gianni Fenu
Knowledge-Based Systems (KBS), Elsevier Journal
Abstract: Modeling user behavior to detect segments of users to target, and to whom ads should be addressed (behavioral targeting), is a widely studied problem in the literature. Various sources of data are mined and modeled in order to detect these segments, such as the queries issued by the users. In this paper, we first show the need for a user segmentation system to employ reliable user preferences, since nearly half of the time users reformulate their queries in order to satisfy their information need. Then we propose a method that analyzes the descriptions of the items positively evaluated by the users, and extracts a vector representation of the words in these descriptions (word embeddings). Since it is widely known that users tend to choose items of the same categories, our approach is designed to avoid the so-called preference stability, which would associate the users to trivial segments. Moreover, we make sure that the interpretability of the generated segments is a characteristic offered to the advertisers who will use them. We performed different sets of experiments on a large real-world dataset, which validated our approach and showed its capability to produce effective segments.
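As a rough illustration of the idea behind this paper, a user's preferences can be represented by averaging the word embeddings of the descriptions of the items she positively evaluated; users with similar vectors fall into the same segment. The sketch below uses tiny hand-made vectors and made-up words (a real system would learn embeddings from a large corpus); it is only an illustration, not the paper's actual model.

```python
import math

# Toy word vectors standing in for learned word embeddings
# (hypothetical values; a real system trains them on a large corpus).
EMBEDDINGS = {
    "camera": (0.9, 0.1, 0.0),
    "lens":   (0.8, 0.2, 0.1),
    "novel":  (0.1, 0.9, 0.2),
    "author": (0.0, 0.8, 0.3),
}

def user_vector(liked_descriptions):
    """Average the embeddings of the words appearing in the descriptions
    of the items a user positively evaluated."""
    words = [w for d in liked_descriptions for w in d.split() if w in EMBEDDINGS]
    return tuple(sum(EMBEDDINGS[w][k] for w in words) / len(words) for k in range(3))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

u1 = user_vector(["camera lens", "camera"])
u2 = user_vector(["novel author"])
u3 = user_vector(["lens camera"])

# Users 1 and 3 share photography-related tastes, so their vectors are
# far more similar than those of users 1 and 2.
assert cosine(u1, u3) > cosine(u1, u2)
```

Grouping users by the similarity of these averaged vectors yields segments driven by the semantics of the liked items rather than by explicit categories.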

Binary Sieves: Toward a Semantic Approach to User Segmentation for Behavioral Targeting
Roberto Saia, Ludovico Boratto, Salvatore Carta, Gianni Fenu
Future Generation Computer Systems (FGCS), Elsevier Journal
Abstract: Behavioral targeting is the process of addressing ads to a specific set of users. The set of target users is detected from a segmentation of the user set, based on their interactions with the website (pages visited, items purchased, etc.). Recently, in order to improve the segmentation process, the semantics behind the user behavior has been exploited, by analyzing the queries issued by the users. However, nearly half of the time users need to reformulate their queries in order to satisfy their information need. In this paper, we tackle the problem of semantic behavioral targeting considering reliable user preferences, by performing a semantic analysis on the descriptions of the items positively rated by the users. We also consider widely known problems, such as the interpretability of a segment, and the fact that user preferences are usually stable over time, which could lead to a trivial segmentation. In order to overcome these issues, our approach allows an advertiser to automatically extract a user segment by specifying the interests that she/he wants to target, by means of a novel boolean algebra; the segments are composed of users whose evaluated items are semantically related to these interests. This leads to interpretable and non-trivial segments, built by using reliable information. Experimental results confirm the effectiveness of our approach at producing user segments.
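The boolean algebra mentioned in the abstract can be pictured, in a much simplified form, as plain set operations over elementary interest segments. The sketch below is only an illustration of the idea, with made-up user and interest names; it is not the paper's actual formalism.

```python
# Each elementary segment holds the users whose liked items are
# semantically related to one interest (names are illustrative).
segments = {
    "sports":  {"ann", "bob", "carl"},
    "travel":  {"bob", "dana"},
    "cooking": {"carl", "dana", "eve"},
}
all_users = set().union(*segments.values())

def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return all_users - a

# An advertiser asks for: "users interested in sports OR travel,
# but NOT cooking".
target = AND(OR(segments["sports"], segments["travel"]),
             NOT(segments["cooking"]))
assert target == {"ann", "bob"}
```

Because the target is built from named interests, the resulting segment stays interpretable: the advertiser can read off exactly which combination of interests each user satisfies.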

A semantic approach to remove incoherent items from a user profile and improve the accuracy of a recommender system
Roberto Saia, Ludovico Boratto, Salvatore Carta
Journal of Intelligent Information Systems (JIIS), Springer Journal
Abstract: Recommender systems usually suggest items by exploiting all the previous interactions of the users with a system (e.g., in order to decide the movies to recommend to a user, all the movies she previously purchased are considered). This canonical approach sometimes could lead to wrong results due to several factors, such as a change in user preferences over time, or the use of her account by third parties. This kind of incoherence in the user profiles defines a lower bound on the error that recommender systems may achieve when they generate suggestions for a user, an aspect known in the literature as the magic barrier. This paper proposes a novel dynamic coherence-based approach to define the user profile used in the recommendation process. The main aim is to identify and remove from the previously evaluated items those not semantically adherent to the others, in order to make a user profile as close as possible to the user's real preferences, solving the aforementioned problems. Moreover, reshaping the user profile in such a way leads to great advantages in terms of computational complexity, since the number of items considered during the recommendation process is highly reduced. The performed experiments show the effectiveness of our approach in removing incoherent items from a user profile, increasing the recommendation accuracy.
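A minimal sketch of the coherence idea (not the paper's actual algorithm): represent each profile item as a vector and drop the items whose average similarity to the rest of the profile falls below a threshold. The vectors and the threshold below are illustrative assumptions.

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def prune_incoherent(item_vectors, threshold=0.4):
    """Keep only the items whose average similarity to the rest of the
    profile reaches the threshold (a simplified stand-in for the
    semantic coherence criterion described in the paper)."""
    keep = []
    for i, v in enumerate(item_vectors):
        sims = [cosine(v, w) for j, w in enumerate(item_vectors) if j != i]
        if sum(sims) / len(sims) >= threshold:
            keep.append(i)
    return keep

profile = [(1.0, 0.0),   # coherent with the next item
           (0.9, 0.1),
           (0.0, 1.0)]   # an outlier, e.g. a purchase made by a third party
assert prune_incoherent(profile) == [0, 1]
```

The pruned profile is both closer to the user's real tastes and smaller, which is where the computational advantage mentioned in the abstract comes from.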

A Proactive Time-frame Convolution Vector (TFCV) Technique to Detect Frauds Attempts in E-commerce Transactions
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the International Conference on Communication and Information Processing (ICCIP), Tokyo, Japan. Published in International Journal of e-Education, e-Business, e-Management and e-Learning (IJEEEE)
Abstract: Any business that carries out activities on the Internet and accepts payments through debit or credit cards also implicitly accepts all the risks related to them, such as the risk that some transactions are fraudulent. Although these risks can lead to significant economic losses, nearly all companies continue to use these powerful instruments of payment, since the benefits derived from them outweigh the potential risks involved. The design of effective strategies able to face this problem is however particularly challenging, due to several factors, such as the heterogeneity and the non-stationary distribution of the data stream, as well as the presence of an imbalanced class distribution. The problem is further complicated by the scarcity of public datasets, for confidentiality reasons, which prevents researchers from verifying new strategies in many data contexts. Differently from almost all strategies at the state of the art, instead of producing a unique model based on the past transactions of the users, in this paper we present an approach that generates a set of models (behavioral patterns) that allow us to evaluate a new transaction by considering the behavior of the user in different temporal frames of her/his history. The size of the temporal frames and the number of levels (granularity) used to discretize the values in the behavioral patterns can be adjusted in order to adapt the system's sensitivity to the operating environment. Considering that our models do not need to be trained on both the past legitimate and fraudulent transactions of a user, since they use only the legitimate ones, we can operate in a proactive manner, detecting fraudulent transactions that have never occurred in the past. This approach also overcomes the data imbalance problem that afflicts the machine learning approaches at the state of the art. The evaluation of the proposed approach is performed by comparing it with one of the best-performing state-of-the-art approaches, Random Forests, using a real-world credit card dataset.

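The idea of evaluating a new transaction against behavioral patterns built over temporal frames, with adjustable granularity, can be sketched roughly as follows. The code uses a single feature (the transaction amount) and made-up parameters; the actual TFCV technique is considerably richer.

```python
def discretize(amount, levels, max_amount):
    """Map a transaction amount to one of `levels` discrete bins."""
    return min(int(amount / max_amount * levels), levels - 1)

def frame_patterns(history, frame_size, levels, max_amount):
    """Build one behavioral pattern per temporal frame of the user's
    legitimate transactions (a simplified sketch; the real behavioral
    patterns involve more than a single amount feature)."""
    frames = [history[i:i + frame_size] for i in range(0, len(history), frame_size)]
    return [{discretize(a, levels, max_amount) for a in f} for f in frames]

def is_suspicious(patterns, amount, levels, max_amount):
    """Flag a new transaction whose discretized amount never appears in
    any behavioral pattern of the user's history."""
    level = discretize(amount, levels, max_amount)
    return all(level not in p for p in patterns)

history = [10, 12, 15, 11, 14, 13]           # past legitimate amounts
patterns = frame_patterns(history, frame_size=3, levels=10, max_amount=100)
assert not is_suspicious(patterns, 12, 10, 100)   # consistent with history
assert is_suspicious(patterns, 95, 10, 100)       # never-seen behavior
```

Note that only legitimate transactions are used to build the patterns, which is what allows the proactive detection of fraud types never observed before.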
A Latent Semantic Pattern Recognition Strategy for an Untrivial Targeted Advertising
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the 4th IEEE International Congress (BigData), New York, United States of America
Abstract: Target definition is a process aimed at partitioning the potential audience of an advertiser into several classes, according to specific criteria. Almost all the existing approaches take into account only the explicit preferences of the users, without considering the hidden semantics embedded in their choices, so the target definition is affected by widely known problems. One of the most important is that easily understandable segments are not effective for marketing purposes due to their triviality, whereas more complex segmentations are hard to understand. In this paper we propose a novel segmentation strategy able to uncover the implicit preferences of the users, by studying the semantic overlap between the classes of items they positively evaluated and the remaining classes. The main advantages of our proposal are that the desired target can be specified by the advertiser, and that the set of users is easily described by the class of items that characterizes them; this means that the complexity of the semantic analysis is hidden from the advertiser, and we obtain an interpretable and non-trivial user segmentation, built by using reliable information. Experimental results confirm the effectiveness of our approach in the generation of the target audience.

Introducing a Weighted Ontology to Improve the Graph-based Semantic Similarity Measures
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the 6th International Conference on Networking and Information Technology (ICNIT), Tokyo, Japan. Published in International Journal of Signal Processing Systems (IJSPS)
Abstract: Semantic similarity measures are designed to compare terms that belong to the same ontology. Many of them are based on a graph structure, such as the well-known lexical database for the English language, WordNet, which groups words into sets of synonyms called synsets. Each synset represents a unique vertex of the WordNet semantic graph, through which it is possible to obtain information about the relations between the different synsets. The literature shows several ways to determine the similarity between words or sentences through WordNet (e.g., by measuring the distance among the words, or by counting the number of edges between the corresponding synsets), but almost all of them do not take into account the peculiar aspects of the dataset in use. In some contexts this strategy could lead to poor results, because it considers only the relationships between vertices of the WordNet semantic graph, without giving them different weights based on the frequency of the synsets within the considered datasets. In other words, common synsets and rare synsets are valued equally. This can create problems in some applications, such as recommender systems, where WordNet is exploited to evaluate the semantic similarity between the textual descriptions of the items positively evaluated by the users and the descriptions of the items not evaluated yet. In this context, we need to identify the user preferences as accurately as possible, and by not taking synset frequency into account we risk not recommending certain items to the users, since the semantic similarity generated by the most common synsets present in the descriptions of other items could prevail. This work addresses the problem by introducing a novel criterion for evaluating the similarity between words (and sentences) that exploits the WordNet semantic graph, adding to it the weight information of the synsets. The effectiveness of the proposed strategy is verified in the recommender systems context, where the recommendations are generated on the basis of the semantic similarity between the items stored in the user profiles and the items not evaluated yet.

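The intuition of weighting the semantic graph by synset frequency can be illustrated, in a heavily simplified form, with an IDF-like weight over shared synsets. The frequencies and the weighting formula below are illustrative assumptions, not the paper's actual measure.

```python
import math

# Toy synset frequencies in an item-description corpus (hypothetical).
freq = {"dog": 50, "cat": 40, "axolotl": 2}
total = sum(freq.values())

def weight(synset):
    """IDF-like weight: rare synsets count more than common ones
    (an illustrative stand-in for weighting the WordNet graph)."""
    return math.log(total / freq[synset])

def weighted_overlap(synsets_a, synsets_b):
    """Similarity as the summed weight of the shared synsets."""
    return sum(weight(s) for s in set(synsets_a) & set(synsets_b))

# Sharing the rare synset "axolotl" is stronger evidence of similar
# tastes than sharing the very common synset "dog".
assert weighted_overlap(["axolotl"], ["axolotl"]) > weighted_overlap(["dog"], ["dog"])
```

With uniform weights the two comparisons above would be tied, which is exactly the "common and rare synsets are valued equally" problem the abstract describes.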
Multiple Behavioral Models: a Divide and Conquer Strategy to Fraud Detection in Financial Data Streams
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the 7th International Conference on Knowledge Discovery and Information Retrieval (KDIR), Lisbon, Portugal
Abstract: The exponential and rapid growth of E-commerce, driven both by the new opportunities offered by the Internet and by the spread of debit and credit card use in online purchases, has strongly increased the number of frauds, causing large economic losses to the businesses involved. The design of effective strategies able to face this problem is however particularly challenging, due to several factors, such as the heterogeneity and the non-stationary distribution of the data stream, as well as the presence of an imbalanced class distribution. The problem is further complicated by the scarcity of public datasets, for confidentiality reasons, which prevents researchers from verifying new strategies in many data contexts. Differently from the canonical state-of-the-art strategies, instead of defining a unique model based on the past transactions of the users, we follow a Divide and Conquer strategy, defining multiple models (user behavioral patterns), which we exploit to evaluate a new transaction, in order to detect potential attempts of fraud. We can act on some parameters of this process, in order to adapt the models' sensitivity to the operating environment. Considering that our models do not need to be trained on both the past legitimate and fraudulent transactions of a user, since they use only the legitimate ones, we can operate in a proactive manner, detecting fraudulent transactions that have never occurred in the past. This approach also overcomes the data imbalance problem that afflicts machine learning approaches. The evaluation of the proposed approach is performed by comparing it with one of the best-performing state-of-the-art approaches, Random Forests, using a real-world credit card dataset.

Popularity Does Not Always Mean Triviality: Introduction of Popularity Criteria to Improve the Accuracy of a Recommender System
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the International Conference on Computer Science and Information Technology (ICCSIT), Amsterdam, Netherlands. Published in Journal of Computers (JCP)
Abstract: The main goal of a recommender system is to provide suggestions, by predicting a set of items that might interest the users. In this paper, we focus on the role that the popularity of the items can play in the recommendation process. The main idea behind this work is that if an item with a high predicted rating for a user is very popular, this information about its popularity can be effectively employed to select the items to recommend. Indeed, by merging a high predicted rating with a high popularity, the effectiveness of the produced recommendations increases with respect to a case in which a less popular item is suggested. The proposed strategy aims to introduce into the recommendation process new criteria based on the items' popularity, by measuring how much an item is preferred by users. Through a post-processing approach, we use this metric to extend one of the best-performing state-of-the-art recommendation techniques, i.e., SVD++. The effectiveness of this hybrid recommendation strategy has been verified through a series of experiments, which show strong improvements in terms of accuracy w.r.t. SVD++.

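The post-processing idea can be sketched as a simple linear blend between a predicted rating and a normalized popularity score. The blend, the weight `alpha` and the sample data below are illustrative assumptions; the paper defines its own popularity criteria on top of SVD++.

```python
def rerank(predictions, popularity, alpha=0.8):
    """Post-process predicted ratings by blending them with a normalized
    popularity score on the same 0-5 scale (illustrative formula)."""
    max_pop = max(popularity.values())
    score = {item: alpha * r + (1 - alpha) * 5 * popularity[item] / max_pop
             for item, r in predictions.items()}
    return sorted(score, key=score.get, reverse=True)

predictions = {"a": 4.5, "b": 4.4, "c": 3.0}   # e.g. SVD++ output
popularity  = {"a": 10,  "b": 500, "c": 50}    # e.g. rating counts

# The nearly tied items "a" and "b" are reordered in favor of the far
# more popular "b".
assert rerank(predictions, popularity)[0] == "b"
```

Because the blend happens after prediction, the underlying recommender (here assumed to be SVD++) does not need to be modified at all.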
Exploiting the Evaluation Frequency of the Items to Enhance the Recommendation Accuracy
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the International Conference on Computer Applications & Technology (ICCAT), Rome, Italy
Abstract: The main task of a recommender system is to suggest a list of items that users may be interested in. In this paper, we focus on the role that the popularity of the items plays in the recommendation process. If, on the one hand, considering only the most popular items generates trivial recommendations, on the other hand, not taking item popularity into consideration can lead to non-optimal performance, since the system does not differentiate the items, giving them the same weight during the recommendation process. We would therefore risk excluding from the recommendations some popular items that would have a high probability of being preferred by the users, suggesting instead others that, despite meeting the selection criteria, have less chance of being preferred. The proposed strategy aims to introduce into the recommendation process new criteria based on the items' popularity, through two novel metrics. With the first metric we evaluate the semantic relevance of an item with respect to the user profile, while with the second metric we measure how much the item is preferred by users. Through a post-processing approach, we use these metrics to extend one of the best-performing state-of-the-art recommendation techniques: SVD++. The effectiveness of this hybrid recommendation strategy has been verified through a series of experiments, which show strong improvements in terms of accuracy w.r.t. SVD++.

A New Perspective on Recommender Systems: a Class Path Information Model
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the Science and Information Conference (SAI), London, United Kingdom

Abstract: Recommender systems suggest items that might interest the users. The recommendation process is usually performed at the level of a single item, i.e., for each item not evaluated by a user, classic approaches look for the rating given by similar users for that item, or for an item with similar content. This leads to the so-called overspecialization/serendipity problem, in which the recommended items are trivial and users do not come across surprising items. In this paper we first show that the preferences of the users are actually distributed over a small set of classes of items, leading the recommended items to be too similar to the ones already evaluated. We also present a novel representation model, named Class Path Information (CPI), able to express the current and future preferences of the users in terms of a ranked set of classes of items. Our approach to user preference modeling is based on a semantic analysis of the items evaluated by the users, in order to extend the ground truth and predict where the future preferences of the users will go. Experimental results show that our approach, by including in the CPI model the same classes predicted by a state-of-the-art recommender system, is able to accurately model the preferences of the users in terms of classes rather than single items, allowing recommender systems to suggest non-trivial items.
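In its simplest reading, a CPI-like profile ranks the classes of the items a user has evaluated. The sketch below captures only this ranking step, with made-up item classes; the full model also predicts future classes through semantic analysis.

```python
from collections import Counter

def class_path(evaluated_items, item_class):
    """Rank the classes of the items a user evaluated, most frequent
    first: a minimal sketch of the Class Path Information idea."""
    counts = Counter(item_class[i] for i in evaluated_items)
    return [c for c, _ in counts.most_common()]

# Hypothetical catalog mapping items to their class.
item_class = {"m1": "thriller", "m2": "thriller", "m3": "sci-fi", "m4": "thriller"}
assert class_path(["m1", "m2", "m3", "m4"], item_class) == ["thriller", "sci-fi"]
```

Reasoning at the level of ranked classes rather than single items is what lets the approach steer recommendations away from near-duplicates of already evaluated items.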

Semantic Coherence-based User Profile Modeling in the Recommender Systems Context
Roberto Saia, Ludovico Boratto, Salvatore Carta
Proceedings of the 6th International Conference on Knowledge Discovery and Information Retrieval (KDIR), Rome, Italy

Abstract: Recommender systems usually produce their results based on the interpretation of the users' whole interaction history. This canonical approach sometimes could lead to wrong results due to several factors, such as a change in user preferences over time, or the use of an account by third parties. This work proposes a novel dynamic coherence-based approach that analyzes the information stored in the user profiles based on its coherence. The main aim is to identify and remove from the previously evaluated items those not adherent to the average preferences, in order to make a user profile as close as possible to the user's real tastes. The conducted experiments show the effectiveness of our approach in removing incoherent items from a user profile, increasing the recommendation accuracy.
Articoli / Articles

Hping - The Swiss army knife of security
The perfect tool for analyzing and forging TCP/IP packets
Published in the magazine "Linux Pro", issue 124, December 2012

Introduction: Thanks to its ability to analyze and forge TCP/IP packets, hping is a true Swiss army knife of security, since it makes it possible to test the effectiveness of protective devices, making it an indispensable companion both for those in charge of network security and, unfortunately, for their adversaries...

"Creating a Fake Wi-Fi Hotspot to Capture Connected Users Information" & "Deceiving Defenses with Nmap Camouflaged Scanning"

Republished in 'Hakin9 Exploiting Software Bible', June 2012


Wireless network mapping techniques
Hunting for Wi-Fi networks
Published in the magazine "Linux Pro", issue 118, June 2012

Introduction: Wireless network mapping is an activity that cuts across the whole field of IT security, since it provides valuable information both to those whose job is to defend their networks against unauthorized access and to their adversaries who, conversely, work to breach them...

Secure your LAN
We test the network, highlighting all of its vulnerabilities
Published in the magazine "Win Magazine", issue 166, June 2012

Introduction: Although the standard tools used to defend our computers and local networks against attacks (firewalls, first and foremost) offer adequate protection against external threats in many cases, it must be noted that in certain circumstances these tools can prove ineffective...

Deceiving Networks Defenses with Nmap Camouflaged Scanning
Published in 'Hakin9 Exploiting Software', April 2012

Overview: Nmap (a contraction of 'Network Mapper') is an open-source software tool designed to rapidly scan both single hosts and large networks. To perform its functions, Nmap uses special IP packets (raw packets) to probe which hosts are active on the target network: about these hosts, it is able to ...

Creating a Fake Wi-Fi Hotspot to Capture Connected Users Information
Use a standard laptop to create a fake open wireless access point
Published in 'Hakin9 Exploiting Software', March 2012

Overview: We can use a standard laptop to create a fake open wireless access point that allows us to capture a large amount of information about connected users; in certain environments, such as airports or meeting areas, this kind of operation can represent an enormous security threat but, on the other hand, the same approach is a powerful way to check the wireless activity in certain areas ...

Proactive Network Defence through Simulated Network
How to use some techniques and tools in order to deceive the potential intruders in our network
Published in 'Hakin9 Extra', February 2012

Overview: A honeypot-based solution provides a credible simulation of a complete network environment in which we can add and activate one or more virtual hosts (the honeypots) in various configurations: a network of honeypot systems is called a honeynet...

From the Theory of Prime Numbers to Quantum Cryptography
The history of a successful marriage between theoretical mathematics and the modern computer science
Published in 'Hakin9 Extra', January 2012

Overview: The typical 'modus operandi' of the computer science community is certainly more oriented toward pragmatism than toward fully understanding what underlies the techniques and tools in use. This article will try to fill one of these gaps by showing the close connection between mathematics and modern cryptographic systems. Without claiming to achieve full completeness, the goal here is to present some of the most important mathematical theories that govern the operation of modern cryptography...


Rsyslog: advanced features and high reliability
Advanced logging with Rsyslog
Published in the magazine "Linux&C", issue 75, November 2011

Introduction: Rsyslog has been adopted by the major distributions as a replacement for the venerable syslogd, over which it offers greater flexibility and new features...


The network is under control - Regulating and filtering content
How to implement an efficient management system based on Squid and DansGuardian
Published in the magazine "Linux Pro", issue 106, July 2011

Introduction: The issues surrounding the filtering and regulation of local network users' access to external networks have recently been brought to the fore by the provisions issued by the Italian Data Protection Authority, new rules that require administrators to regulate this activity rigorously...
COLLABORAZIONI / COOPERATIONS


Enterprise network security in the age of Facebook
A wiki project promoted by IBM to discuss IT security and, specifically, how the use of social networks affects the security of corporate infrastructures, as well as the risks that result from it.

Document released under a Creative Commons "Attribution-NonCommercial-NoDerivatives" license

Authors: Mario Mazzolin, Simone Riccetti, Cristina Berta, Raoul Chiesa, Angelo Iacubino, Roberto Marmo, Roberto Saia
Information security in the Web 2.0 era
A wiki project promoted by IBM to discuss IT security and, specifically, how the tools offered by Web 2.0 can be managed without jeopardizing system security.

Document released under a Creative Commons "Attribution-NonCommercial-NoDerivatives" license

Authors: Luca Cavone, Gaetano Di Bello, Angelo Iacubino, Armando Leotta, Roberto Marmo, Mario Mazzolini, Daniele Pauletto, Roberto Saia
LIBRI / BOOKS
SIMILARITY AND DIVERSITY
Two Sides of the Same Coin in Data Analysis
Language: English
Pages: 168
Author: Roberto Saia
ISBN-13: 978-3-659-88315-6
ISBN-10: 3659883158
EAN: 9783659883156
Year: 2016
Publisher: LAP LAMBERT Academic Publishing

Language: Italian
Pages: 362 - 17x24
Author: Roberto Saia
ISBN: 9788882338633
Year: 2010
Publisher: FAG Milano
Series: Pro DigitalLifeStyle

Language: Italian
Pages: 336 - 17x24
Author: Roberto Saia
ISBN: 9788882337742
Year: 2009
Publisher: FAG Milano
Series: Pro DigitalLifeStyle

Language: Italian
Pages: 448
Author: Roberto Saia
ISBN: 9788882336912
Year: 2008
Publisher: FAG Milano
Series: Pro DigitalLifeStyle
 
LIBRI DIGITALI / EBOOKS
Language: Italian
Pages: 446
Author: Roberto Saia
Year: 2011
Publisher: Manuali.net
Format: E-Book

Language: Italian
Pages: 100
Author: Roberto Saia
Year: 2010
Publisher: Manuali.net
Format: E-Book

Language: Italian
Pages: 86
Author: Roberto Saia
Year: 2010
Publisher: Manuali.net
Format: E-Book
 
ARTICOLI / ARTICLES
05-06-2012 From the theory of prime numbers to quantum cryptography
30-03-2010 A heuristic approach to security in the Semantic Web
03-03-2010 Cross-Site Scripting vulnerabilities
03-03-2010 Proactive security in the second-generation Web
02-02-2010 Risks arising from the aggregate analysis of data of little individual value
01-06-2008 Information Technology and security
11-06-2008 Introduction to IT security
11-01-2005 Building a hardware firewall
09-10-2004 SQL Injection Attack Technique
14-07-2004 The Home Computer Security
 
GUIDE / TUTORIALS
Tutorial

#1: The Metasploit framework

Introduction: The Metasploit project was created with the goal of building a software product able to provide information about the vulnerabilities of computer systems, both to carry out analyses of the operating scenario (penetration testing) and to support the development of defensive tools... (read the full article)

Tutorial

#2: Managing permissions on Linux

Introduction: Permission management in multi-user operating systems such as Linux is of great importance and, for this very reason, every system provides commands designed specifically for these operations... (read the full article)

Tutorial

#3: The permission mask on Linux

Introduction: A rather valuable tool for system security is the so-called permission mask, which makes it possible to manage privileges on files and directories. It is used through the umask command, which... (read the full article)
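The arithmetic behind the permission mask can be checked in a few lines: the bits set in the umask are cleared from the mode a program requests when creating a file. A small sketch (in Python rather than at the shell, purely for illustration):

```python
# Bits set in the umask are cleared from the requested creation mode.
def effective_mode(requested, umask):
    return requested & ~umask

# With the common umask 022, a file created with mode 666 (rw-rw-rw-)
# ends up as 644 (rw-r--r--).
assert oct(effective_mode(0o666, 0o022)) == "0o644"
# A stricter umask 077 leaves permissions for the owner only: 600.
assert oct(effective_mode(0o666, 0o077)) == "0o600"
```

The same calculation is what the shell performs when you set a mask with `umask 022` and then create a file.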

Tutorial

#4: Penetration testing with Nmap

Introduction: One of the most important activities in IT security is undoubtedly what the literature calls a "Penetration Test", a name given to all those activities whose purpose is to verify, more or less thoroughly, the security of an IT infrastructure... (read the full article)

Tutorial

#5: Introduction to Social Engineering: Phishing and Pharming

Introduction: The term Social Engineering denotes a way of operating in which the attacker relies on deception and/or persuasion to obtain confidential information that usually allows whoever employs it to gain illicit access to one or more systems... (read the full article)

Tutorial

#6: Detecting intrusions in a wireless network with AirSnare

Introduction: In this article we discuss AirSnare, a piece of software that, unlike other products with similar functionality, requires no special technical expertise, allowing anyone to perform targeted checks to identify unauthorized activity on their own network... (read the full article)

Tutorial

#7: Pointers in the C programming language

Introduction: One of the aspects that newcomers to the C language find hardest to grasp is certainly pointers, a powerful tool provided by the language that makes it possible to perform many operations in ways unusual compared to... (read the full article)

Tutorial

#8: Principles of electronic warfare: attack, protection and support

Introduction: This article is a sort of curious digression on wireless technologies, which are considered here in a very particular sense, one certainly closer to the world of intelligence or, more generally... (read the full article)

Tutorial

#9: Principles of electronic warfare: Tempest systems technology

Introduction: The term Tempest identifies a particular field concerned with the study of the electromagnetic emissions of certain hardware components of a computer, emissions (the electromagnetic fields generated by the oscillation of the signals processed by its circuits) which, if... (read the full article)

Tutorial

#10: The Subnet Mask

Introduction: The subnet mask is used to distinguish, through a bitwise AND operation, the portion of an IP address that identifies the network from the portion that identifies the machine (host)... (read the full article)
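The AND operation described above can be tried directly with Python's standard ipaddress module, for example:

```python
import ipaddress

# The AND between an IP address and its subnet mask yields the network
# portion; the remaining bits identify the host.
ip = ipaddress.ip_interface("192.168.1.130/255.255.255.0")
assert str(ip.network.network_address) == "192.168.1.0"

# The same AND done by hand on the raw 32-bit integers:
addr = int(ipaddress.ip_address("192.168.1.130"))
mask = int(ipaddress.ip_address("255.255.255.0"))
assert str(ipaddress.ip_address(addr & mask)) == "192.168.1.0"
```

Here 192.168.1.0 is the network portion, while the remaining bits (.130) identify the host within it.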

Tutorial

#11: Introduction to the Buffer Overflow technique

Introduction: The technique known as Buffer Overflow exploits a certain type of vulnerability present in some software; the vulnerability in question consists of... (read the full article)

Tutorial

#12: Web of Things: an introduction to Paraimpu

Introduction: The limits that long characterized the world wide web have recently been overcome by the so-called "Web of Things", an innovative interaction paradigm that, in addition to the usual users, sites and services, brings a huge number of simple and complex devices onto the network... (read the full article)

 
© 2004/2011, Roberto Saia - All Rights Reserved