Abstract. Classifiers today face numerous challenges, including overfitting, high computational costs, low accuracy, imbalanced datasets, and a lack of interpretability. Additionally, traditional methods often struggle with noisy or missing data. To address these issues, we propose novel classification methods based on feature partitioning and outranking measures. Our approach eliminates the need for prior domain knowledge by automatically learning feature intervals directly from the data. These intervals capture key patterns, enhancing adaptability and insight. To improve robustness, we incorporate outranking measures, which reduce the impact of noise and uncertainty through pairwise comparisons of alternatives across features. We evaluate our classifiers on multiple datasets from the UCI repository and compare them with established methods, including k-Nearest Neighbors (k-NN), Support Vector Machine (SVM), Random Forest (RF), Neural Networks (NNs), Naive Bayes (NB), and Nearest Centroid (NC). The results demonstrate that our methods are robust to imbalanced datasets and irrelevant features, achieving comparable or superior performance in many cases. Furthermore, our classifiers offer enhanced interpretability while maintaining high predictive accuracy.
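To make the two core ideas concrete, the sketch below illustrates them in miniature: intervals learned per class and per feature directly from the training data, and a simple outranking-style vote that scores each class by how many features of a query point fall inside that class's intervals. This is a minimal illustration, not the paper's actual algorithm: the per-class min/max interval rule, the concordance-count vote, and the `IntervalOutrankingClassifier` name are all assumptions made for the example.

```python
# Minimal, illustrative sketch: interval-based classification with an
# outranking-style concordance vote. NOT the authors' exact method; the
# interval rule (per-class feature min/max) and the vote are assumptions.
import numpy as np

class IntervalOutrankingClassifier:
    """Learn one [lower, upper] interval per (class, feature) from the
    training data, then classify by counting, for each class, how many
    features of a query point fall inside that class's intervals."""

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        self.classes_ = np.unique(y)
        # intervals_[c] is an (n_features, 2) array of [lower, upper] bounds.
        self.intervals_ = {
            c: np.stack([X[y == c].min(axis=0), X[y == c].max(axis=0)], axis=1)
            for c in self.classes_
        }
        return self

    def predict(self, X):
        X = np.asarray(X, dtype=float)
        # scores[i, j]: number of features of sample i that lie inside
        # class j's learned intervals (a simple concordance score).
        scores = np.stack(
            [((X >= iv[:, 0]) & (X <= iv[:, 1])).sum(axis=1)
             for iv in (self.intervals_[c] for c in self.classes_)],
            axis=1,
        )
        return self.classes_[scores.argmax(axis=1)]

# Toy usage on two well-separated 2-D classes.
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(5, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
clf = IntervalOutrankingClassifier().fit(X_train, y_train)
print(clf.predict([[0.2, -0.1], [5.3, 4.8]]))  # expected: [0 1]
```

Because each prediction reduces to checking which learned intervals contain the query's feature values, the decision for any sample can be read off feature by feature, which is the kind of interpretability the abstract claims for the proposed classifiers.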